Test Report: Docker_Linux_crio_arm64 22112

236742b414df344dfb04283ee96fef673bd34cb2:2025-12-12:42745

Failed tests (48/316); a re-run sketch follows the table.

Order  Failed test  Duration (s)
38 TestAddons/serial/Volcano 0.32
44 TestAddons/parallel/Registry 15.07
45 TestAddons/parallel/RegistryCreds 0.54
46 TestAddons/parallel/Ingress 144.69
47 TestAddons/parallel/InspektorGadget 5.27
48 TestAddons/parallel/MetricsServer 5.37
50 TestAddons/parallel/CSI 37.54
51 TestAddons/parallel/Headlamp 3.28
52 TestAddons/parallel/CloudSpanner 6.29
53 TestAddons/parallel/LocalPath 9.45
54 TestAddons/parallel/NvidiaDevicePlugin 6.29
55 TestAddons/parallel/Yakd 6.27
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 502.31
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 369.24
175 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 2.5
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 2.64
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 2.5
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 734.29
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 2.32
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 0.06
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 1.72
197 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 3.15
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 2.36
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 241.69
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 1.57
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 0.1
223 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 101.92
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.05
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.26
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.26
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.26
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.25
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.26
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port 2.26
275 TestMultiControlPlane/serial/RestartClusterKeepsNodes 507.13
276 TestMultiControlPlane/serial/DeleteSecondaryNode 2.45
277 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 2.31
278 TestMultiControlPlane/serial/StopCluster 2.92
279 TestMultiControlPlane/serial/RestartCluster 95.77
280 TestMultiControlPlane/serial/DegradedAfterClusterRestart 4.52
281 TestMultiControlPlane/serial/AddSecondaryNode 91.21
282 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 4.74
293 TestJSONOutput/pause/Command 2.16
299 TestJSONOutput/unpause/Command 2.24
358 TestKubernetesUpgrade 793.51
384 TestPause/serial/Pause 6.99
440 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7200.086
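
Each failure can be re-run on its own against the same binary. A minimal sketch, assuming a minikube source checkout (these integration tests live under test/integration) and that out/minikube-linux-arm64 has already been built; any driver or container-runtime arguments the CI job adds are omitted here:

    # -run takes a Go test name regexp; subtests are addressed with '/'.
    go test ./test/integration -run 'TestAddons/serial/Volcano' -v -timeout 30m
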
TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable volcano --alsologtostderr -v=1: exit status 11 (314.777954ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 20:12:43.358427  371747 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:12:43.360090  371747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:43.360155  371747 out.go:374] Setting ErrFile to fd 2...
	I1212 20:12:43.360178  371747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:43.360593  371747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:12:43.360963  371747 mustload.go:66] Loading cluster: addons-603031
	I1212 20:12:43.361421  371747 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:43.361463  371747 addons.go:622] checking whether the cluster is paused
	I1212 20:12:43.361605  371747 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:43.361632  371747 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:12:43.362181  371747 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:12:43.391888  371747 ssh_runner.go:195] Run: systemctl --version
	I1212 20:12:43.391960  371747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:12:43.412488  371747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:12:43.518910  371747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:12:43.519052  371747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:12:43.551946  371747 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:12:43.551970  371747 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:12:43.551976  371747 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:12:43.551980  371747 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:12:43.551983  371747 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:12:43.551989  371747 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:12:43.551992  371747 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:12:43.551995  371747 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:12:43.551998  371747 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:12:43.552005  371747 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:12:43.552009  371747 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:12:43.552013  371747 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:12:43.552016  371747 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:12:43.552019  371747 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:12:43.552022  371747 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:12:43.552028  371747 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:12:43.552035  371747 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:12:43.552040  371747 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:12:43.552045  371747 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:12:43.552048  371747 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:12:43.552053  371747 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:12:43.552056  371747 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:12:43.552059  371747 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:12:43.552062  371747 cri.go:89] found id: ""
	I1212 20:12:43.552114  371747 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:12:43.569144  371747 out.go:203] 
	W1212 20:12:43.571982  371747 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:12:43.572003  371747 out.go:285] * 
	* 
	W1212 20:12:43.577280  371747 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:12:43.580314  371747 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.32s)
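
The exit status 11 above is minikube's paused-state probe failing: before disabling an addon it lists the kube-system containers with crictl and then asks runc which of them are paused (addons.go:622 and cri.go:54 in the log), and the runc step exits 1 because /run/runc does not exist on this crio node. A minimal sketch of repeating the two probed commands by hand over minikube ssh, using the profile name and binary path from this run:

    # List kube-system containers, as the probe does via crictl.
    out/minikube-linux-arm64 -p addons-603031 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

    # The step that fails: runc keeps no state under /run/runc on this node, so the
    # listing exits 1 and the disable aborts with MK_ADDON_DISABLE_PAUSED.
    out/minikube-linux-arm64 -p addons-603031 ssh -- sudo runc list -f json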

TestAddons/parallel/Registry (15.07s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 10.300125ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004132675s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003902037s
addons_test.go:394: (dbg) Run:  kubectl --context addons-603031 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-603031 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-603031 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.549325179s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 ip
2025/12/12 20:13:08 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable registry --alsologtostderr -v=1: exit status 11 (261.171232ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 20:13:08.746565  372666 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:13:08.748114  372666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:08.748132  372666 out.go:374] Setting ErrFile to fd 2...
	I1212 20:13:08.748139  372666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:08.748462  372666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:13:08.748779  372666 mustload.go:66] Loading cluster: addons-603031
	I1212 20:13:08.749165  372666 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:08.749183  372666 addons.go:622] checking whether the cluster is paused
	I1212 20:13:08.749288  372666 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:08.749302  372666 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:13:08.749802  372666 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:13:08.767878  372666 ssh_runner.go:195] Run: systemctl --version
	I1212 20:13:08.767947  372666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:13:08.786590  372666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:13:08.891107  372666 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:13:08.891185  372666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:13:08.921952  372666 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:13:08.921973  372666 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:13:08.921978  372666 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:13:08.921982  372666 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:13:08.921985  372666 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:13:08.921989  372666 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:13:08.922002  372666 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:13:08.922006  372666 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:13:08.922009  372666 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:13:08.922016  372666 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:13:08.922022  372666 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:13:08.922026  372666 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:13:08.922029  372666 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:13:08.922032  372666 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:13:08.922035  372666 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:13:08.922040  372666 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:13:08.922046  372666 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:13:08.922049  372666 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:13:08.922052  372666 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:13:08.922055  372666 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:13:08.922059  372666 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:13:08.922062  372666 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:13:08.922066  372666 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:13:08.922069  372666 cri.go:89] found id: ""
	I1212 20:13:08.922121  372666 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:13:08.937392  372666 out.go:203] 
	W1212 20:13:08.940399  372666 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:13:08.940431  372666 out.go:285] * 
	* 
	W1212 20:13:08.945666  372666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:13:08.948754  372666 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.07s)

TestAddons/parallel/RegistryCreds (0.54s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 10.484127ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-603031
addons_test.go:334: (dbg) Run:  kubectl --context addons-603031 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (296.247142ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1212 20:14:01.513794  374194 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:14:01.514733  374194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:14:01.514749  374194 out.go:374] Setting ErrFile to fd 2...
	I1212 20:14:01.514755  374194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:14:01.515055  374194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:14:01.515436  374194 mustload.go:66] Loading cluster: addons-603031
	I1212 20:14:01.515860  374194 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:14:01.515881  374194 addons.go:622] checking whether the cluster is paused
	I1212 20:14:01.516003  374194 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:14:01.516019  374194 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:14:01.516762  374194 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:14:01.541674  374194 ssh_runner.go:195] Run: systemctl --version
	I1212 20:14:01.541728  374194 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:14:01.565784  374194 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:14:01.687801  374194 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:14:01.687922  374194 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:14:01.721098  374194 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:14:01.721122  374194 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:14:01.721128  374194 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:14:01.721132  374194 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:14:01.721135  374194 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:14:01.721139  374194 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:14:01.721143  374194 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:14:01.721146  374194 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:14:01.721149  374194 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:14:01.721155  374194 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:14:01.721161  374194 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:14:01.721172  374194 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:14:01.721176  374194 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:14:01.721179  374194 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:14:01.721183  374194 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:14:01.721196  374194 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:14:01.721200  374194 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:14:01.721203  374194 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:14:01.721206  374194 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:14:01.721209  374194 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:14:01.721214  374194 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:14:01.721217  374194 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:14:01.721220  374194 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:14:01.721223  374194 cri.go:89] found id: ""
	I1212 20:14:01.721279  374194 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:14:01.741954  374194 out.go:203] 
	W1212 20:14:01.744975  374194 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:14:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:14:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:14:01.745005  374194 out.go:285] * 
	* 
	W1212 20:14:01.750151  374194 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:14:01.753071  374194 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.54s)

TestAddons/parallel/Ingress (144.69s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-603031 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-603031 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-603031 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [71d36b1f-89e4-4628-9189-e847b9d42f68] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [71d36b1f-89e4-4628-9189-e847b9d42f68] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00319666s
I1212 20:13:32.109396  364853 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.608904083s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-603031 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-603031
helpers_test.go:244: (dbg) docker inspect addons-603031:

-- stdout --
	[
	    {
	        "Id": "a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d",
	        "Created": "2025-12-12T20:10:25.50131524Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 366282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:10:25.588980896Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d/hosts",
	        "LogPath": "/var/lib/docker/containers/a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d/a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d-json.log",
	        "Name": "/addons-603031",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-603031:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-603031",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d",
	                "LowerDir": "/var/lib/docker/overlay2/6c8365389a915be1368c688b3c136baeaa82eaaf97aa1171231441e9576ffbba-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c8365389a915be1368c688b3c136baeaa82eaaf97aa1171231441e9576ffbba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c8365389a915be1368c688b3c136baeaa82eaaf97aa1171231441e9576ffbba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c8365389a915be1368c688b3c136baeaa82eaaf97aa1171231441e9576ffbba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-603031",
	                "Source": "/var/lib/docker/volumes/addons-603031/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-603031",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-603031",
	                "name.minikube.sigs.k8s.io": "addons-603031",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0a01ffc4a7512eb3bb17f18f3d2fb2ff623e6bdc5de8cbfda60b5df285c6f8f7",
	            "SandboxKey": "/var/run/docker/netns/0a01ffc4a751",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-603031": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:8f:d6:79:11:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e609896b1bb2f4e51b06a0aeeafa65d36f83a53d2d0617984d4f134269288e0",
	                    "EndpointID": "f511be285609bfdbd3e0fdb964a3b44b32691d6a046b8ca24ee8b5bfa674bd82",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-603031",
	                        "a97e7cf8ec13"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-603031 -n addons-603031
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-603031 logs -n 25: (1.520016309s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p download-docker-584504                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-584504 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ --download-only -p binary-mirror-598936 --alsologtostderr --binary-mirror http://127.0.0.1:40449 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-598936   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ delete  │ -p binary-mirror-598936                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-598936   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ disable dashboard -p addons-603031                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p addons-603031                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ start   │ -p addons-603031 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:12 UTC │
	│ addons  │ addons-603031 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ addons  │ addons-603031 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ addons  │ enable headlamp -p addons-603031 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ addons  │ addons-603031 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ addons  │ addons-603031 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │                     │
	│ ip      │ addons-603031 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │ 12 Dec 25 20:13 UTC │
	│ addons  │ addons-603031 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │                     │
	│ addons  │ addons-603031 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │                     │
	│ addons  │ addons-603031 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │                     │
	│ ssh     │ addons-603031 ssh cat /opt/local-path-provisioner/pvc-2335e9a8-fead-435f-8d4b-708dc5b5c2fe_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │ 12 Dec 25 20:13 UTC │
	│ addons  │ addons-603031 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │                     │
	│ addons  │ addons-603031 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │                     │
	│ ssh     │ addons-603031 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │                     │
	│ addons  │ addons-603031 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │                     │
	│ addons  │ addons-603031 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:13 UTC │                     │
	│ addons  │ addons-603031 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:14 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-603031                                                                                                                                                                                                                                                                                                                                                                                           │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:14 UTC │ 12 Dec 25 20:14 UTC │
	│ addons  │ addons-603031 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:14 UTC │                     │
	│ ip      │ addons-603031 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:15 UTC │ 12 Dec 25 20:15 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:09:58
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:09:58.990623  365855 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:09:58.990906  365855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:58.990937  365855 out.go:374] Setting ErrFile to fd 2...
	I1212 20:09:58.990956  365855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:58.991246  365855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:09:58.991795  365855 out.go:368] Setting JSON to false
	I1212 20:09:58.992707  365855 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10351,"bootTime":1765559848,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:09:58.992805  365855 start.go:143] virtualization:  
	I1212 20:09:58.996671  365855 out.go:179] * [addons-603031] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:09:59.000456  365855 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:09:59.000838  365855 notify.go:221] Checking for updates...
	I1212 20:09:59.007141  365855 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:09:59.010314  365855 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:09:59.013485  365855 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:09:59.016755  365855 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:09:59.019861  365855 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:09:59.023048  365855 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:09:59.058796  365855 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:09:59.058968  365855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:09:59.114107  365855 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 20:09:59.104877509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:09:59.114214  365855 docker.go:319] overlay module found
	I1212 20:09:59.117447  365855 out.go:179] * Using the docker driver based on user configuration
	I1212 20:09:59.120243  365855 start.go:309] selected driver: docker
	I1212 20:09:59.120259  365855 start.go:927] validating driver "docker" against <nil>
	I1212 20:09:59.120272  365855 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:09:59.121068  365855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:09:59.174868  365855 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 20:09:59.165444327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:09:59.175024  365855 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:09:59.175239  365855 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:09:59.178157  365855 out.go:179] * Using Docker driver with root privileges
	I1212 20:09:59.181050  365855 cni.go:84] Creating CNI manager for ""
	I1212 20:09:59.181123  365855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:09:59.181140  365855 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:09:59.181232  365855 start.go:353] cluster config:
	{Name:addons-603031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-603031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:09:59.184361  365855 out.go:179] * Starting "addons-603031" primary control-plane node in "addons-603031" cluster
	I1212 20:09:59.187159  365855 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:09:59.190185  365855 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:09:59.193136  365855 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:09:59.193196  365855 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 20:09:59.193209  365855 cache.go:65] Caching tarball of preloaded images
	I1212 20:09:59.193235  365855 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:09:59.193302  365855 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:09:59.193313  365855 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:09:59.193660  365855 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/config.json ...
	I1212 20:09:59.193691  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/config.json: {Name:mk36eaea1020099c8427d6188db2385f2d523dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:09:59.209537  365855 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 20:09:59.209685  365855 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 20:09:59.209708  365855 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory, skipping pull
	I1212 20:09:59.209713  365855 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in cache, skipping pull
	I1212 20:09:59.209720  365855 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 as a tarball
	I1212 20:09:59.209729  365855 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 from local cache
	I1212 20:10:17.943401  365855 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 from cached tarball
	I1212 20:10:17.943446  365855 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:10:17.943500  365855 start.go:360] acquireMachinesLock for addons-603031: {Name:mkf4d918b051b7cae7b1771e0ec6d6c76a294488 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:10:17.943633  365855 start.go:364] duration metric: took 108.391µs to acquireMachinesLock for "addons-603031"
	I1212 20:10:17.943664  365855 start.go:93] Provisioning new machine with config: &{Name:addons-603031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-603031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:17.943743  365855 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:10:17.947161  365855 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 20:10:17.947514  365855 start.go:159] libmachine.API.Create for "addons-603031" (driver="docker")
	I1212 20:10:17.947564  365855 client.go:173] LocalClient.Create starting
	I1212 20:10:17.947712  365855 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem
	I1212 20:10:18.703311  365855 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem
	I1212 20:10:19.006033  365855 cli_runner.go:164] Run: docker network inspect addons-603031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:10:19.022308  365855 cli_runner.go:211] docker network inspect addons-603031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:10:19.022403  365855 network_create.go:284] running [docker network inspect addons-603031] to gather additional debugging logs...
	I1212 20:10:19.022448  365855 cli_runner.go:164] Run: docker network inspect addons-603031
	W1212 20:10:19.039090  365855 cli_runner.go:211] docker network inspect addons-603031 returned with exit code 1
	I1212 20:10:19.039131  365855 network_create.go:287] error running [docker network inspect addons-603031]: docker network inspect addons-603031: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-603031 not found
	I1212 20:10:19.039147  365855 network_create.go:289] output of [docker network inspect addons-603031]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-603031 not found
	
	** /stderr **
	I1212 20:10:19.039246  365855 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:19.060273  365855 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bd93c0}
	I1212 20:10:19.060320  365855 network_create.go:124] attempt to create docker network addons-603031 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 20:10:19.060410  365855 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-603031 addons-603031
	I1212 20:10:19.122318  365855 network_create.go:108] docker network addons-603031 192.168.49.0/24 created
	I1212 20:10:19.122355  365855 kic.go:121] calculated static IP "192.168.49.2" for the "addons-603031" container
	I1212 20:10:19.122455  365855 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:10:19.137779  365855 cli_runner.go:164] Run: docker volume create addons-603031 --label name.minikube.sigs.k8s.io=addons-603031 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:10:19.155138  365855 oci.go:103] Successfully created a docker volume addons-603031
	I1212 20:10:19.155234  365855 cli_runner.go:164] Run: docker run --rm --name addons-603031-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-603031 --entrypoint /usr/bin/test -v addons-603031:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:10:21.411656  365855 cli_runner.go:217] Completed: docker run --rm --name addons-603031-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-603031 --entrypoint /usr/bin/test -v addons-603031:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (2.256382323s)
	I1212 20:10:21.411692  365855 oci.go:107] Successfully prepared a docker volume addons-603031
	I1212 20:10:21.411737  365855 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:21.411755  365855 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:10:21.411826  365855 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-603031:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:10:25.432634  365855 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-603031:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (4.020761329s)
	I1212 20:10:25.432670  365855 kic.go:203] duration metric: took 4.020910336s to extract preloaded images to volume ...
	W1212 20:10:25.432830  365855 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 20:10:25.432951  365855 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:10:25.485721  365855 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-603031 --name addons-603031 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-603031 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-603031 --network addons-603031 --ip 192.168.49.2 --volume addons-603031:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:10:25.794767  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Running}}
	I1212 20:10:25.817350  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:25.840511  365855 cli_runner.go:164] Run: docker exec addons-603031 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:10:25.896178  365855 oci.go:144] the created container "addons-603031" has a running status.
	I1212 20:10:25.896211  365855 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa...
	I1212 20:10:26.437903  365855 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:10:26.457419  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:26.475522  365855 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:10:26.475547  365855 kic_runner.go:114] Args: [docker exec --privileged addons-603031 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:10:26.515657  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:26.533444  365855 machine.go:94] provisionDockerMachine start ...
	I1212 20:10:26.533556  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:26.550679  365855 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:26.551022  365855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I1212 20:10:26.551038  365855 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:10:26.551642  365855 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54096->127.0.0.1:33147: read: connection reset by peer
	I1212 20:10:29.704101  365855 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-603031
	
	I1212 20:10:29.704127  365855 ubuntu.go:182] provisioning hostname "addons-603031"
	I1212 20:10:29.704200  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:29.722739  365855 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:29.723052  365855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I1212 20:10:29.723068  365855 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-603031 && echo "addons-603031" | sudo tee /etc/hostname
	I1212 20:10:29.882329  365855 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-603031
	
	I1212 20:10:29.882406  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:29.901687  365855 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:29.902021  365855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I1212 20:10:29.902042  365855 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-603031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-603031/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-603031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:10:30.100250  365855 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:10:30.100278  365855 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:10:30.100304  365855 ubuntu.go:190] setting up certificates
	I1212 20:10:30.100328  365855 provision.go:84] configureAuth start
	I1212 20:10:30.100424  365855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-603031
	I1212 20:10:30.128420  365855 provision.go:143] copyHostCerts
	I1212 20:10:30.128524  365855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:10:30.128674  365855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:10:30.128738  365855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:10:30.128829  365855 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.addons-603031 san=[127.0.0.1 192.168.49.2 addons-603031 localhost minikube]
	I1212 20:10:30.725505  365855 provision.go:177] copyRemoteCerts
	I1212 20:10:30.725572  365855 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:10:30.725615  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:30.742595  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:30.847974  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:10:30.865319  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 20:10:30.883766  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:10:30.902419  365855 provision.go:87] duration metric: took 802.065687ms to configureAuth
	I1212 20:10:30.902452  365855 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:10:30.902653  365855 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:30.902763  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:30.921383  365855 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:30.921700  365855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I1212 20:10:30.921719  365855 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:10:31.248237  365855 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:10:31.248303  365855 machine.go:97] duration metric: took 4.714835649s to provisionDockerMachine
	I1212 20:10:31.248321  365855 client.go:176] duration metric: took 13.300747585s to LocalClient.Create
	I1212 20:10:31.248341  365855 start.go:167] duration metric: took 13.300828932s to libmachine.API.Create "addons-603031"
	I1212 20:10:31.248354  365855 start.go:293] postStartSetup for "addons-603031" (driver="docker")
	I1212 20:10:31.248390  365855 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:10:31.248460  365855 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:10:31.248523  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:31.267907  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:31.372692  365855 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:10:31.375990  365855 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:10:31.376064  365855 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:10:31.376085  365855 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:10:31.376165  365855 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:10:31.376192  365855 start.go:296] duration metric: took 127.831819ms for postStartSetup
	I1212 20:10:31.376542  365855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-603031
	I1212 20:10:31.393877  365855 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/config.json ...
	I1212 20:10:31.394174  365855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:10:31.394225  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:31.410976  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:31.513782  365855 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:10:31.518935  365855 start.go:128] duration metric: took 13.575174059s to createHost
	I1212 20:10:31.518973  365855 start.go:83] releasing machines lock for "addons-603031", held for 13.575316937s
	I1212 20:10:31.519057  365855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-603031
	I1212 20:10:31.536638  365855 ssh_runner.go:195] Run: cat /version.json
	I1212 20:10:31.536702  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:31.536956  365855 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:10:31.537022  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:31.556141  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:31.565731  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:31.659948  365855 ssh_runner.go:195] Run: systemctl --version
	I1212 20:10:31.764213  365855 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:10:31.799054  365855 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:10:31.803493  365855 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:10:31.803573  365855 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:10:31.832962  365855 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 20:10:31.833027  365855 start.go:496] detecting cgroup driver to use...
	I1212 20:10:31.833068  365855 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:10:31.833126  365855 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:10:31.851056  365855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:10:31.864808  365855 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:10:31.864912  365855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:10:31.881912  365855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:10:31.900511  365855 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:10:32.019285  365855 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:10:32.143380  365855 docker.go:234] disabling docker service ...
	I1212 20:10:32.143512  365855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:10:32.165229  365855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:10:32.178573  365855 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:10:32.292077  365855 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:10:32.413572  365855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:10:32.426872  365855 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:10:32.442316  365855 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:10:32.442407  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.452177  365855 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:10:32.452290  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.463077  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.473062  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.483000  365855 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:10:32.492774  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.502544  365855 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.517663  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.526573  365855 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:10:32.534411  365855 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:10:32.542257  365855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:32.657040  365855 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:10:32.837669  365855 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:10:32.837770  365855 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:10:32.841638  365855 start.go:564] Will wait 60s for crictl version
	I1212 20:10:32.841706  365855 ssh_runner.go:195] Run: which crictl
	I1212 20:10:32.845133  365855 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:10:32.868395  365855 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:10:32.868558  365855 ssh_runner.go:195] Run: crio --version
	I1212 20:10:32.899162  365855 ssh_runner.go:195] Run: crio --version
	I1212 20:10:32.930305  365855 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:10:32.932961  365855 cli_runner.go:164] Run: docker network inspect addons-603031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:32.949236  365855 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:10:32.953336  365855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:10:32.963678  365855 kubeadm.go:884] updating cluster {Name:addons-603031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-603031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:10:32.963801  365855 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:32.963866  365855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:32.998573  365855 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:32.998597  365855 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:10:32.998658  365855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:33.042268  365855 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:33.042294  365855 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:10:33.042302  365855 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 20:10:33.042391  365855 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-603031 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-603031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:10:33.042477  365855 ssh_runner.go:195] Run: crio config
	I1212 20:10:33.111907  365855 cni.go:84] Creating CNI manager for ""
	I1212 20:10:33.111932  365855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:33.111951  365855 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:10:33.111974  365855 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-603031 NodeName:addons-603031 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:10:33.112112  365855 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-603031"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:10:33.112191  365855 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:10:33.120501  365855 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:10:33.120620  365855 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:10:33.128779  365855 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 20:10:33.142833  365855 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:10:33.155922  365855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I1212 20:10:33.168184  365855 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:10:33.171904  365855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:10:33.181663  365855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:33.288898  365855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:10:33.305957  365855 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031 for IP: 192.168.49.2
	I1212 20:10:33.305982  365855 certs.go:195] generating shared ca certs ...
	I1212 20:10:33.306013  365855 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:33.306144  365855 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:10:33.732565  365855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt ...
	I1212 20:10:33.732600  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt: {Name:mk136a4872d4735b1a51b53120b75a5ccade3b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:33.732798  365855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key ...
	I1212 20:10:33.732812  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key: {Name:mkd182407294285cd09f957d2c29d8a2f449bcba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:33.732903  365855 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:10:33.856698  365855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt ...
	I1212 20:10:33.856725  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt: {Name:mkacf2c7f9ae40d6aaec7f7a170dec87e851d722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:33.856891  365855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key ...
	I1212 20:10:33.856905  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key: {Name:mkb2be966cf482840e728784dfb858a82dbe8b45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:33.856984  365855 certs.go:257] generating profile certs ...
	I1212 20:10:33.857042  365855 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.key
	I1212 20:10:33.857061  365855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt with IP's: []
	I1212 20:10:34.089489  365855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt ...
	I1212 20:10:34.089526  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: {Name:mk837e5f2ccbdfb557804fd902094182abc3757a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.089721  365855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.key ...
	I1212 20:10:34.089735  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.key: {Name:mk53aa0be088e657c02e69186bdee9e510afb09d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.089827  365855 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key.b7d6e408
	I1212 20:10:34.089847  365855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt.b7d6e408 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 20:10:34.689182  365855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt.b7d6e408 ...
	I1212 20:10:34.689216  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt.b7d6e408: {Name:mkdef7807c8cc1f6201a5888891951c2c01bf017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.689401  365855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key.b7d6e408 ...
	I1212 20:10:34.689419  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key.b7d6e408: {Name:mkebe7d9b8692e69282594ba9f0372c88639708d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.689493  365855 certs.go:382] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt.b7d6e408 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt
	I1212 20:10:34.689579  365855 certs.go:386] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key.b7d6e408 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key
	I1212 20:10:34.689634  365855 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.key
	I1212 20:10:34.689655  365855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.crt with IP's: []
	I1212 20:10:34.915927  365855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.crt ...
	I1212 20:10:34.915959  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.crt: {Name:mkf6d1d069059ae3210ccaa8b5c6e4f517bd9d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.916145  365855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.key ...
	I1212 20:10:34.916159  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.key: {Name:mk123dc9296ff8e8688845e2505214d0152caaf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.916347  365855 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:10:34.916418  365855 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:10:34.916452  365855 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:10:34.916484  365855 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
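The certs.go steps above build the shared CAs (minikubeCA, proxyClientCA) and then the profile certificates, with the apiserver certificate signed for the IPs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2. A minimal crypto/x509 sketch of that pattern, a self-signed CA followed by a CA-signed serving certificate carrying the same IP SANs; key sizes, lifetimes and subjects here are illustrative, not what minikube actually uses:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, analogous to the "minikubeCA" certificate generated above.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}

	// Serving certificate signed by that CA, with the same IP SANs the apiserver cert gets in the log.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
		},
	}
	caCert, _ := x509.ParseCertificate(caDER)
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: caDER})
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}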
	I1212 20:10:34.917043  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:10:34.936782  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:10:34.959031  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:10:34.978963  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:10:34.998783  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 20:10:35.027942  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:10:35.049790  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:10:35.070964  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:10:35.093715  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:10:35.114671  365855 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:10:35.130080  365855 ssh_runner.go:195] Run: openssl version
	I1212 20:10:35.136995  365855 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:35.145394  365855 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:10:35.154673  365855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:35.159449  365855 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:35.159523  365855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:35.202467  365855 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:10:35.211157  365855 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1212 20:10:35.219756  365855 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:10:35.223704  365855 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:10:35.223777  365855 kubeadm.go:401] StartCluster: {Name:addons-603031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-603031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:35.223889  365855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:10:35.223956  365855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:10:35.253734  365855 cri.go:89] found id: ""
	I1212 20:10:35.253876  365855 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:10:35.262834  365855 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:35.271671  365855 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:10:35.271748  365855 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:35.280801  365855 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:35.280825  365855 kubeadm.go:158] found existing configuration files:
	
	I1212 20:10:35.280891  365855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:10:35.289757  365855 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:10:35.289835  365855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:10:35.298264  365855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:10:35.306851  365855 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:10:35.306930  365855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:10:35.315091  365855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:10:35.323637  365855 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:10:35.323708  365855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:10:35.331554  365855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:10:35.339440  365855 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:10:35.339512  365855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
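The loop above is the stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected https://control-plane.minikube.internal:8443 endpoint and removed when the endpoint is missing (here none of the files exist yet, so every grep fails and each rm is a no-op). A compact Go sketch of the same check, with the file names and endpoint taken from the log:

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing elsewhere: remove so kubeadm regenerates it.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("remove %s: %v", f, rmErr)
			}
			continue
		}
		log.Printf("%s already points at %s, keeping it", f, endpoint)
	}
}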
	I1212 20:10:35.347505  365855 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:10:35.417395  365855 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1212 20:10:35.417718  365855 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:10:35.485500  365855 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:10:53.066812  365855 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:10:53.066874  365855 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:10:53.066963  365855 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:10:53.067019  365855 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:10:53.067057  365855 kubeadm.go:319] OS: Linux
	I1212 20:10:53.067103  365855 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:10:53.067153  365855 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:10:53.067200  365855 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:10:53.067249  365855 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:10:53.067296  365855 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:10:53.067349  365855 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:10:53.067396  365855 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:10:53.067459  365855 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:10:53.067508  365855 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:10:53.067581  365855 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:10:53.067672  365855 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:10:53.067758  365855 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:10:53.067819  365855 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:10:53.071037  365855 out.go:252]   - Generating certificates and keys ...
	I1212 20:10:53.071142  365855 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:10:53.071216  365855 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:10:53.071289  365855 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:10:53.071350  365855 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:10:53.071415  365855 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:10:53.071476  365855 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:10:53.071534  365855 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:10:53.071654  365855 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-603031 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 20:10:53.071711  365855 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:10:53.071840  365855 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-603031 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 20:10:53.071920  365855 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:10:53.071988  365855 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:10:53.072036  365855 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:10:53.072095  365855 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:10:53.072170  365855 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:10:53.072231  365855 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:10:53.072290  365855 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:10:53.072357  365855 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:10:53.072441  365855 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:10:53.072529  365855 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:10:53.072666  365855 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:10:53.075686  365855 out.go:252]   - Booting up control plane ...
	I1212 20:10:53.075847  365855 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:10:53.075934  365855 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:10:53.076006  365855 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:10:53.076128  365855 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:10:53.076229  365855 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:10:53.076341  365855 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:10:53.076469  365855 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:10:53.076512  365855 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:10:53.076651  365855 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:10:53.076762  365855 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:10:53.076825  365855 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004714464s
	I1212 20:10:53.076922  365855 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:10:53.077007  365855 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1212 20:10:53.077102  365855 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:10:53.077188  365855 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:10:53.077269  365855 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.715128518s
	I1212 20:10:53.077343  365855 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.682716309s
	I1212 20:10:53.077420  365855 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501565858s
	I1212 20:10:53.077541  365855 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:10:53.077676  365855 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:10:53.077738  365855 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:10:53.077955  365855 kubeadm.go:319] [mark-control-plane] Marking the node addons-603031 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:10:53.078015  365855 kubeadm.go:319] [bootstrap-token] Using token: nbgdzp.csbyudvbvi3h3xct
	I1212 20:10:53.081077  365855 out.go:252]   - Configuring RBAC rules ...
	I1212 20:10:53.081222  365855 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:10:53.081315  365855 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:10:53.081522  365855 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:10:53.081706  365855 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:10:53.081836  365855 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:10:53.081929  365855 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:10:53.082045  365855 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:10:53.082093  365855 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:10:53.082142  365855 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:10:53.082153  365855 kubeadm.go:319] 
	I1212 20:10:53.082210  365855 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:10:53.082218  365855 kubeadm.go:319] 
	I1212 20:10:53.082290  365855 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:10:53.082297  365855 kubeadm.go:319] 
	I1212 20:10:53.082321  365855 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:10:53.082380  365855 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:10:53.082431  365855 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:10:53.082440  365855 kubeadm.go:319] 
	I1212 20:10:53.082491  365855 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:10:53.082498  365855 kubeadm.go:319] 
	I1212 20:10:53.082543  365855 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:10:53.082550  365855 kubeadm.go:319] 
	I1212 20:10:53.082599  365855 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:10:53.082686  365855 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:10:53.082761  365855 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:10:53.082772  365855 kubeadm.go:319] 
	I1212 20:10:53.082866  365855 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:10:53.082960  365855 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:10:53.082969  365855 kubeadm.go:319] 
	I1212 20:10:53.083051  365855 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nbgdzp.csbyudvbvi3h3xct \
	I1212 20:10:53.083167  365855 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:adaa875fcacb3059f0eec2e4962c24570f977d0d03cb0131f0cb68ee03e4f578 \
	I1212 20:10:53.083195  365855 kubeadm.go:319] 	--control-plane 
	I1212 20:10:53.083203  365855 kubeadm.go:319] 
	I1212 20:10:53.083294  365855 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:10:53.083301  365855 kubeadm.go:319] 
	I1212 20:10:53.083384  365855 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nbgdzp.csbyudvbvi3h3xct \
	I1212 20:10:53.083512  365855 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:adaa875fcacb3059f0eec2e4962c24570f977d0d03cb0131f0cb68ee03e4f578 
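The kubeadm init output ends with ready-made join commands; the bootstrap token and discovery CA hash embedded in them are what a worker or additional control-plane node needs. A small Go sketch that extracts both fields from captured output like the block above (the sample string reuses the token and hash printed in the log):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	output := `kubeadm join control-plane.minikube.internal:8443 --token nbgdzp.csbyudvbvi3h3xct \
	--discovery-token-ca-cert-hash sha256:adaa875fcacb3059f0eec2e4962c24570f977d0d03cb0131f0cb68ee03e4f578 `

	token := regexp.MustCompile(`--token (\S+)`).FindStringSubmatch(output)
	hash := regexp.MustCompile(`--discovery-token-ca-cert-hash (\S+)`).FindStringSubmatch(output)
	if token != nil && hash != nil {
		fmt.Printf("worker join: kubeadm join control-plane.minikube.internal:8443 --token %s --discovery-token-ca-cert-hash %s\n",
			token[1], hash[1])
	}
}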
	I1212 20:10:53.083532  365855 cni.go:84] Creating CNI manager for ""
	I1212 20:10:53.083560  365855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:53.086856  365855 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 20:10:53.089890  365855 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:10:53.094652  365855 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 20:10:53.094677  365855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:10:53.111432  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:10:53.434925  365855 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:10:53.435048  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:53.435128  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-603031 minikube.k8s.io/updated_at=2025_12_12T20_10_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=addons-603031 minikube.k8s.io/primary=true
	I1212 20:10:53.457994  365855 ops.go:34] apiserver oom_adj: -16
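The oom_adj probe above (cat /proc/$(pgrep kube-apiserver)/oom_adj reporting -16) confirms the apiserver keeps its OOM protection. A rough Go equivalent that finds the process by its /proc/<pid>/comm name and reads the legacy oom_adj file; this is purely illustrative, the pgrep-based shell form in the log is what minikube actually runs:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	matches, _ := filepath.Glob("/proc/[0-9]*/comm")
	for _, comm := range matches {
		name, err := os.ReadFile(comm)
		if err != nil || strings.TrimSpace(string(name)) != "kube-apiserver" {
			continue
		}
		pidDir := filepath.Dir(comm)
		adj, err := os.ReadFile(filepath.Join(pidDir, "oom_adj"))
		if err != nil {
			continue
		}
		fmt.Printf("%s oom_adj=%s\n", pidDir, strings.TrimSpace(string(adj)))
	}
}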
	I1212 20:10:53.649083  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:54.149314  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:54.649623  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:55.149315  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:55.649454  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:56.149414  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:56.649563  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:56.738242  365855 kubeadm.go:1114] duration metric: took 3.303234942s to wait for elevateKubeSystemPrivileges
	I1212 20:10:56.738289  365855 kubeadm.go:403] duration metric: took 21.514533948s to StartCluster
	I1212 20:10:56.738308  365855 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:56.738459  365855 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:10:56.738921  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:56.739122  365855 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:10:56.739190  365855 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:56.739400  365855 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:56.739450  365855 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
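The toEnable map above drives everything that follows: each addon set to true gets its own "Setting addon ..." block below. A trivial Go sketch of reducing such a map to the sorted list of addons that will actually be enabled; the map literal is a hypothetical subset, not the full list from the log:

package main

import (
	"fmt"
	"sort"
)

func main() {
	toEnable := map[string]bool{
		"volcano":             true,
		"registry":            true,
		"metrics-server":      true,
		"csi-hostpath-driver": true,
		"dashboard":           false,
		"ambassador":          false,
	}
	var enabled []string
	for name, on := range toEnable {
		if on {
			enabled = append(enabled, name)
		}
	}
	sort.Strings(enabled)
	fmt.Println("addons to enable:", enabled)
}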
	I1212 20:10:56.739538  365855 addons.go:70] Setting yakd=true in profile "addons-603031"
	I1212 20:10:56.739552  365855 addons.go:239] Setting addon yakd=true in "addons-603031"
	I1212 20:10:56.739578  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.740063  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.740550  365855 addons.go:70] Setting metrics-server=true in profile "addons-603031"
	I1212 20:10:56.740577  365855 addons.go:239] Setting addon metrics-server=true in "addons-603031"
	I1212 20:10:56.740602  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.740675  365855 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-603031"
	I1212 20:10:56.740725  365855 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-603031"
	I1212 20:10:56.740777  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.741022  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.741368  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.744704  365855 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-603031"
	I1212 20:10:56.744742  365855 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-603031"
	I1212 20:10:56.744776  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.745235  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.747386  365855 addons.go:70] Setting registry=true in profile "addons-603031"
	I1212 20:10:56.747485  365855 addons.go:239] Setting addon registry=true in "addons-603031"
	I1212 20:10:56.747552  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.748197  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.749970  365855 addons.go:70] Setting cloud-spanner=true in profile "addons-603031"
	I1212 20:10:56.750004  365855 addons.go:239] Setting addon cloud-spanner=true in "addons-603031"
	I1212 20:10:56.750054  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.750622  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.761748  365855 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-603031"
	I1212 20:10:56.761817  365855 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-603031"
	I1212 20:10:56.761847  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.762320  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.765518  365855 addons.go:70] Setting registry-creds=true in profile "addons-603031"
	I1212 20:10:56.765613  365855 addons.go:239] Setting addon registry-creds=true in "addons-603031"
	I1212 20:10:56.765678  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.766192  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.779498  365855 addons.go:70] Setting default-storageclass=true in profile "addons-603031"
	I1212 20:10:56.779534  365855 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-603031"
	I1212 20:10:56.780490  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.780680  365855 addons.go:70] Setting storage-provisioner=true in profile "addons-603031"
	I1212 20:10:56.780710  365855 addons.go:239] Setting addon storage-provisioner=true in "addons-603031"
	I1212 20:10:56.780769  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.782781  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.788192  365855 addons.go:70] Setting gcp-auth=true in profile "addons-603031"
	I1212 20:10:56.788277  365855 mustload.go:66] Loading cluster: addons-603031
	I1212 20:10:56.788985  365855 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:56.789494  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.794098  365855 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-603031"
	I1212 20:10:56.794185  365855 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-603031"
	I1212 20:10:56.794561  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.815570  365855 addons.go:70] Setting ingress=true in profile "addons-603031"
	I1212 20:10:56.815608  365855 addons.go:239] Setting addon ingress=true in "addons-603031"
	I1212 20:10:56.815663  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.816147  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.818758  365855 addons.go:70] Setting volcano=true in profile "addons-603031"
	I1212 20:10:56.818791  365855 addons.go:239] Setting addon volcano=true in "addons-603031"
	I1212 20:10:56.818875  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.819640  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.828424  365855 addons.go:70] Setting ingress-dns=true in profile "addons-603031"
	I1212 20:10:56.828458  365855 addons.go:239] Setting addon ingress-dns=true in "addons-603031"
	I1212 20:10:56.828504  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.828977  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.837153  365855 addons.go:70] Setting volumesnapshots=true in profile "addons-603031"
	I1212 20:10:56.837197  365855 addons.go:239] Setting addon volumesnapshots=true in "addons-603031"
	I1212 20:10:56.837235  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.837733  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.839815  365855 addons.go:70] Setting inspektor-gadget=true in profile "addons-603031"
	I1212 20:10:56.839880  365855 addons.go:239] Setting addon inspektor-gadget=true in "addons-603031"
	I1212 20:10:56.839922  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.849775  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.885057  365855 out.go:179] * Verifying Kubernetes components...
	I1212 20:10:56.902244  365855 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1212 20:10:56.905162  365855 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1212 20:10:56.905189  365855 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1212 20:10:56.905258  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:56.907361  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.922496  365855 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1212 20:10:56.922895  365855 addons.go:239] Setting addon default-storageclass=true in "addons-603031"
	I1212 20:10:56.922925  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.923439  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.928810  365855 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 20:10:56.928849  365855 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 20:10:56.928914  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:56.940574  365855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:56.945143  365855 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1212 20:10:56.945210  365855 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1212 20:10:56.946572  365855 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1212 20:10:56.956333  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 20:10:56.958360  365855 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 20:10:56.958385  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 20:10:56.958456  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:56.946920  365855 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1212 20:10:56.948396  365855 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-603031"
	I1212 20:10:56.959241  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.959727  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.976685  365855 out.go:179]   - Using image docker.io/registry:3.0.0
	I1212 20:10:56.981626  365855 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1212 20:10:56.984557  365855 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 20:10:56.984582  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1212 20:10:56.984652  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:56.984869  365855 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1212 20:10:56.991047  365855 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 20:10:56.991120  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1212 20:10:56.991218  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.007940  365855 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1212 20:10:57.008019  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 20:10:57.008124  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.009488  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 20:10:57.016210  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 20:10:57.019523  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 20:10:57.020031  365855 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 20:10:57.020052  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1212 20:10:57.020116  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.028545  365855 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 20:10:57.030370  365855 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 20:10:57.030396  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1212 20:10:57.030472  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	W1212 20:10:57.067072  365855 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1212 20:10:57.067679  365855 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:10:57.090417  365855 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:10:57.090499  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:10:57.090594  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.092088  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 20:10:57.118407  365855 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1212 20:10:57.085204  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 20:10:57.122227  365855 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1212 20:10:57.122303  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1212 20:10:57.122405  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.124760  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 20:10:57.124839  365855 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 20:10:57.124952  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.147196  365855 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1212 20:10:57.148161  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.161133  365855 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 20:10:57.161907  365855 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:10:57.161923  365855 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:10:57.161987  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.164752  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.165939  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 20:10:57.166306  365855 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 20:10:57.166323  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1212 20:10:57.166385  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.195066  365855 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:10:57.195163  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 20:10:57.201446  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 20:10:57.201643  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.202220  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.204533  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 20:10:57.204555  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 20:10:57.204621  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.210623  365855 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 20:10:57.211806  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.220503  365855 out.go:179]   - Using image docker.io/busybox:stable
	I1212 20:10:57.223554  365855 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 20:10:57.223581  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 20:10:57.223650  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.239669  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.244881  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.270378  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.324536  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.330297  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.331859  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	W1212 20:10:57.343478  365855 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 20:10:57.343525  365855 retry.go:31] will retry after 274.899991ms: ssh: handshake failed: EOF
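The dial failure above is retried after a short delay instead of failing the addon setup outright. A minimal Go sketch of that retry pattern; dialOnce is a hypothetical stand-in for the real SSH connection attempt and the delays are illustrative:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// dialOnce simulates a connection attempt that fails twice before succeeding.
func dialOnce(attempt int) error {
	if attempt < 2 {
		return errors.New("ssh: handshake failed: EOF")
	}
	return nil
}

func main() {
	for attempt := 0; attempt < 5; attempt++ {
		if err := dialOnce(attempt); err != nil {
			wait := time.Duration(200+rand.Intn(200)) * time.Millisecond
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			continue
		}
		fmt.Println("connected")
		return
	}
	fmt.Println("giving up")
}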
	I1212 20:10:57.351868  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.370217  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.371341  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.374424  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.406130  365855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:10:57.772153  365855 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 20:10:57.772223  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 20:10:57.957620  365855 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1212 20:10:57.957647  365855 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1212 20:10:57.960219  365855 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 20:10:57.960244  365855 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 20:10:57.990165  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 20:10:58.018628  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 20:10:58.050127  365855 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 20:10:58.050156  365855 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 20:10:58.075820  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 20:10:58.091009  365855 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1212 20:10:58.091035  365855 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1212 20:10:58.102794  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 20:10:58.106703  365855 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 20:10:58.106732  365855 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 20:10:58.107069  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 20:10:58.121015  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 20:10:58.123624  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:10:58.129266  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 20:10:58.129294  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 20:10:58.160939  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 20:10:58.172010  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1212 20:10:58.213490  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:10:58.228995  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 20:10:58.243790  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 20:10:58.243813  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 20:10:58.251003  365855 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 20:10:58.251029  365855 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 20:10:58.278837  365855 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 20:10:58.278863  365855 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 20:10:58.301221  365855 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1212 20:10:58.301246  365855 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1212 20:10:58.386264  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 20:10:58.386291  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 20:10:58.434904  365855 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 20:10:58.434965  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 20:10:58.453613  365855 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 20:10:58.453677  365855 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 20:10:58.475671  365855 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1212 20:10:58.475747  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1212 20:10:58.549894  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 20:10:58.549968  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 20:10:58.635236  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 20:10:58.638604  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1212 20:10:58.672756  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 20:10:58.672855  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 20:10:58.674997  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 20:10:58.675073  365855 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 20:10:58.898525  365855 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.492361752s)
	I1212 20:10:58.899363  365855 node_ready.go:35] waiting up to 6m0s for node "addons-603031" to be "Ready" ...
	I1212 20:10:58.899544  365855 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.704449089s)
	I1212 20:10:58.899595  365855 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
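
For reference, the host record injected above can be inspected directly in the live cluster. This is an illustrative check, not part of the test run, and it assumes the standard coredns ConfigMap layout (Corefile stored under data.Corefile):

    # print the Corefile and show the injected hosts stanza
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
    # expected stanza, roughly:
    #   hosts {
    #      192.168.49.1 host.minikube.internal
    #      fallthrough
    #   }
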
	I1212 20:10:58.907025  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 20:10:58.907047  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 20:10:58.943262  365855 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 20:10:58.943334  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 20:10:59.314811  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 20:10:59.314887  365855 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 20:10:59.394694  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 20:10:59.405349  365855 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-603031" context rescaled to 1 replicas
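
The rescale logged above is roughly equivalent to the following kubectl invocation; this is a sketch of the effect, not the exact call minikube makes:

    kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system scale deployment coredns --replicas=1
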
	I1212 20:10:59.568870  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.578665923s)
	I1212 20:10:59.649011  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 20:10:59.649080  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 20:10:59.872795  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 20:10:59.872866  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 20:11:00.193043  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 20:11:00.193143  365855 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 20:11:00.425351  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1212 20:11:00.913136  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
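
The node readiness check being retried here can be reproduced by hand. A minimal sketch, using the node name and the 6m budget from the log:

    # block until the node reports Ready
    kubectl wait --for=condition=Ready node/addons-603031 --timeout=6m
    # or read the Ready condition directly
    kubectl get node addons-603031 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
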
	I1212 20:11:01.950292  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.931620877s)
	I1212 20:11:02.135635  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.032800939s)
	I1212 20:11:02.135734  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.028643215s)
	I1212 20:11:02.135818  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.014778214s)
	I1212 20:11:02.135890  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.012244401s)
	I1212 20:11:02.136250  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.060402332s)
	I1212 20:11:02.172894  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.011906173s)
	I1212 20:11:02.172930  365855 addons.go:495] Verifying addon metrics-server=true in "addons-603031"
	I1212 20:11:02.236320  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.064268754s)
	I1212 20:11:02.236393  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.022879137s)
	I1212 20:11:03.253252  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.024217684s)
	I1212 20:11:03.253286  365855 addons.go:495] Verifying addon ingress=true in "addons-603031"
	I1212 20:11:03.253493  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.618174383s)
	I1212 20:11:03.253508  365855 addons.go:495] Verifying addon registry=true in "addons-603031"
	I1212 20:11:03.253808  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.61512753s)
	I1212 20:11:03.254154  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.859389605s)
	W1212 20:11:03.254185  365855 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 20:11:03.254204  365855 retry.go:31] will retry after 278.257898ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
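
The failure above is an ordering race: csi-hostpath-snapshotclass.yaml defines a VolumeSnapshotClass, but it is applied in the same invocation that creates the snapshot.storage.k8s.io CRDs, so the API server has no mapping for the kind yet. The log shows minikube simply retries the apply after a short delay. A manual alternative, sketched here with the CRD name created above, is to wait for the CRD to be established before applying the custom resource:

    kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
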
	I1212 20:11:03.256670  365855 out.go:179] * Verifying ingress addon...
	I1212 20:11:03.258641  365855 out.go:179] * Verifying registry addon...
	I1212 20:11:03.258742  365855 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-603031 service yakd-dashboard -n yakd-dashboard
	
	I1212 20:11:03.261546  365855 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 20:11:03.261556  365855 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 20:11:03.275805  365855 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 20:11:03.275826  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:03.278156  365855 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 20:11:03.278176  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
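
The kapi.go:96 polling lines that follow watch these label selectors until the pods leave Pending. The same wait can be expressed with kubectl; shown only as an equivalent sketch (completed admission jobs that carry the ingress-nginx label may need to be excluded):

    kubectl -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=6m
    kubectl -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=6m
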
	W1212 20:11:03.403996  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:03.532811  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 20:11:03.564443  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.139041147s)
	I1212 20:11:03.564486  365855 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-603031"
	I1212 20:11:03.567474  365855 out.go:179] * Verifying csi-hostpath-driver addon...
	I1212 20:11:03.571159  365855 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 20:11:03.584063  365855 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 20:11:03.584085  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
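
Alongside the pod wait, the driver registration itself can be confirmed once the apply that completed at 20:11:03 has taken effect. An illustrative check, assuming the usual hostpath driver name (hostpath.csi.k8s.io):

    # list registered CSI drivers and the addon's pods
    kubectl get csidrivers
    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver
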
	I1212 20:11:03.766281  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:03.766683  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:04.075436  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:04.265378  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:04.266056  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:04.575542  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:04.637123  365855 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 20:11:04.637207  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:11:04.654197  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:11:04.766536  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:04.766879  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:04.778913  365855 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 20:11:04.792312  365855 addons.go:239] Setting addon gcp-auth=true in "addons-603031"
	I1212 20:11:04.792412  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:11:04.792882  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:11:04.810730  365855 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 20:11:04.810810  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:11:04.828440  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:11:05.078709  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:05.265150  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:05.265395  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:05.574496  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:05.764685  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:05.765091  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 20:11:05.905187  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:06.083032  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:06.267622  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:06.268131  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:06.276442  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.743576352s)
	I1212 20:11:06.276516  365855 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.465704238s)
	I1212 20:11:06.279953  365855 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 20:11:06.282864  365855 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1212 20:11:06.285797  365855 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 20:11:06.285830  365855 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 20:11:06.299425  365855 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 20:11:06.299469  365855 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 20:11:06.315308  365855 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 20:11:06.315383  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1212 20:11:06.329902  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 20:11:06.575176  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:06.767858  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:06.769632  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:06.848046  365855 addons.go:495] Verifying addon gcp-auth=true in "addons-603031"
	I1212 20:11:06.853005  365855 out.go:179] * Verifying gcp-auth addon...
	I1212 20:11:06.855728  365855 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 20:11:06.866842  365855 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 20:11:06.866908  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
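
The gcp-auth verification below polls this label in the gcp-auth namespace. Once the webhook pod is Ready, the addon's mutating admission webhook should also be registered; an illustrative check, not part of the run:

    kubectl -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
    # the addon injects credentials via a mutating webhook; list registrations to confirm
    kubectl get mutatingwebhookconfigurations
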
	I1212 20:11:07.077741  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:07.265064  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:07.265362  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:07.359314  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:07.574475  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:07.765935  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:07.766304  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:07.859118  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:08.078903  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:08.264899  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:08.265607  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:08.359317  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:08.403290  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:08.574675  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:08.765444  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:08.765659  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:08.859897  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:09.075561  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:09.266525  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:09.266639  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:09.359868  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:09.574566  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:09.765056  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:09.765259  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:09.859326  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:10.076152  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:10.272191  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:10.272978  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:10.358817  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:10.575297  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:10.765668  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:10.765895  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:10.859045  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:10.903279  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:11.079219  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:11.265600  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:11.265781  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:11.358776  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:11.575887  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:11.765476  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:11.765681  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:11.858647  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:12.078611  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:12.264887  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:12.265517  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:12.359512  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:12.575311  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:12.765983  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:12.766268  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:12.859469  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:13.079962  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:13.266430  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:13.266621  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:13.360481  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:13.402479  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:13.574352  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:13.766420  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:13.766892  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:13.858989  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:14.079836  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:14.265215  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:14.265588  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:14.359291  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:14.574614  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:14.764785  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:14.765099  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:14.858948  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:15.078542  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:15.264943  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:15.265200  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:15.358906  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:15.403150  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:15.574221  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:15.765752  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:15.766260  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:15.859160  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:16.078853  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:16.265175  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:16.265314  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:16.359095  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:16.575204  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:16.765494  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:16.765813  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:16.858940  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:17.078389  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:17.265635  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:17.266038  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:17.358811  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:17.574756  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:17.765515  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:17.765703  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:17.859646  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:17.902233  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:18.077918  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:18.265886  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:18.266307  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:18.358908  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:18.575134  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:18.765352  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:18.765528  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:18.859205  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:19.077975  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:19.266029  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:19.266184  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:19.359074  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:19.574959  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:19.765821  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:19.765959  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:19.858782  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:19.902560  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:20.078461  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:20.265943  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:20.266123  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:20.359074  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:20.576963  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:20.764921  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:20.765883  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:20.858676  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:21.077999  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:21.265704  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:21.265847  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:21.358846  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:21.574933  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:21.765217  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:21.765377  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:21.859482  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:22.080138  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:22.265427  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:22.265656  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:22.359510  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:22.402476  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:22.574879  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:22.765481  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:22.765776  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:22.859464  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:23.078422  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:23.266534  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:23.266731  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:23.359491  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:23.574071  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:23.765668  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:23.765867  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:23.858871  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:24.078320  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:24.264959  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:24.265401  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:24.359394  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:24.402772  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:24.575165  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:24.765695  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:24.765820  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:24.862932  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:25.079437  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:25.268945  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:25.269223  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:25.358903  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:25.577351  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:25.764719  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:25.765248  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:25.859076  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:26.077580  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:26.265330  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:26.265768  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:26.359576  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:26.574961  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:26.765492  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:26.765632  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:26.859302  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:26.902948  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:27.077984  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:27.265497  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:27.265649  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:27.359602  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:27.574572  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:27.765005  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:27.765516  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:27.859395  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:28.078024  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:28.265346  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:28.265649  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:28.359191  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:28.573958  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:28.765213  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:28.765432  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:28.859531  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:29.077637  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:29.264558  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:29.264921  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:29.358521  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:29.402030  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:29.573884  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:29.765595  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:29.766199  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:29.859208  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:30.079327  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:30.265417  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:30.265733  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:30.360511  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:30.574054  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:30.765399  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:30.765758  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:30.859642  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:31.076104  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:31.265682  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:31.265897  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:31.358828  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:31.402986  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:31.575550  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:31.765488  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:31.765815  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:31.858566  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:32.077296  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:32.265616  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:32.265834  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:32.358830  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:32.574306  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:32.765407  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:32.765828  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:32.858348  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:33.077262  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:33.265570  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:33.265711  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:33.358955  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:33.574420  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:33.764729  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:33.764916  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:33.859191  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:33.902837  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:34.078469  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:34.264737  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:34.264931  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:34.358890  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:34.574587  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:34.764890  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:34.765131  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:34.858932  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:35.079104  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:35.266075  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:35.266146  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:35.359039  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:35.574836  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:35.764967  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:35.765602  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:35.859333  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:35.903007  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:36.078237  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:36.265815  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:36.265888  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:36.358778  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:36.574351  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:36.764536  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:36.764922  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:36.858513  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:37.077783  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:37.265079  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:37.265272  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:37.359058  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:37.574706  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:37.764848  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:37.764879  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:37.859506  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:38.078384  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:38.265652  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:38.265805  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:38.358703  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:38.402569  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:38.575172  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:38.765400  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:38.765658  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:38.859558  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:39.077881  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:39.265090  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:39.265201  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:39.358890  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:39.574777  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:39.765390  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:39.765506  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:39.870872  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:39.949667  365855 node_ready.go:49] node "addons-603031" is "Ready"
	I1212 20:11:39.949698  365855 node_ready.go:38] duration metric: took 41.050250733s for node "addons-603031" to be "Ready" ...
	I1212 20:11:39.949714  365855 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:11:39.949771  365855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:11:39.975522  365855 api_server.go:72] duration metric: took 43.23628628s to wait for apiserver process to appear ...
	I1212 20:11:39.975552  365855 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:11:39.975574  365855 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 20:11:40.034712  365855 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 20:11:40.038026  365855 api_server.go:141] control plane version: v1.34.2
	I1212 20:11:40.038061  365855 api_server.go:131] duration metric: took 62.500544ms to wait for apiserver health ...
	I1212 20:11:40.038073  365855 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:11:40.052006  365855 system_pods.go:59] 19 kube-system pods found
	I1212 20:11:40.052045  365855 system_pods.go:61] "coredns-66bc5c9577-9rqzw" [057d8772-1bb0-492d-9aa7-6363449d3dff] Pending
	I1212 20:11:40.052052  365855 system_pods.go:61] "csi-hostpath-attacher-0" [37fb58df-d546-442b-a670-5bc0591cb463] Pending
	I1212 20:11:40.052057  365855 system_pods.go:61] "csi-hostpath-resizer-0" [58e72845-9acf-4248-98fe-d2148d2c6241] Pending
	I1212 20:11:40.052062  365855 system_pods.go:61] "csi-hostpathplugin-5b869" [facbae11-2217-40f3-8871-eb549588ac4c] Pending
	I1212 20:11:40.052066  365855 system_pods.go:61] "etcd-addons-603031" [655ab573-e8d3-4059-bc18-cbff1d8c1470] Running
	I1212 20:11:40.052070  365855 system_pods.go:61] "kindnet-2dtkn" [f011e7d0-45c3-4bda-bdb6-4714cb7ab310] Running
	I1212 20:11:40.052074  365855 system_pods.go:61] "kube-apiserver-addons-603031" [8d5df7d6-482a-4f34-8c46-7e17b01e4ea9] Running
	I1212 20:11:40.052078  365855 system_pods.go:61] "kube-controller-manager-addons-603031" [136e1ec3-e803-4b8f-b19d-50b6a6142cf4] Running
	I1212 20:11:40.052082  365855 system_pods.go:61] "kube-ingress-dns-minikube" [716456fd-092e-4201-b01a-dee91e8a3804] Pending
	I1212 20:11:40.052085  365855 system_pods.go:61] "kube-proxy-6c94h" [e813b5e8-481e-4d67-9ac6-44618fff8d3e] Running
	I1212 20:11:40.052089  365855 system_pods.go:61] "kube-scheduler-addons-603031" [a3da1652-6b66-47ea-8874-4a0bb4bc1e62] Running
	I1212 20:11:40.052094  365855 system_pods.go:61] "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Pending
	I1212 20:11:40.052104  365855 system_pods.go:61] "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Pending
	I1212 20:11:40.052108  365855 system_pods.go:61] "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Pending
	I1212 20:11:40.052119  365855 system_pods.go:61] "registry-creds-764b6fb674-7zll2" [3e7c826f-448c-4599-a385-861be612bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 20:11:40.052124  365855 system_pods.go:61] "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Pending
	I1212 20:11:40.052132  365855 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5jcb6" [6ef718fa-1c34-41d2-9b8f-a3fdb531333d] Pending
	I1212 20:11:40.052138  365855 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bbnmg" [16ac7458-41a3-4701-8352-abf5e11cf295] Pending
	I1212 20:11:40.052141  365855 system_pods.go:61] "storage-provisioner" [89573572-bfc6-4422-b808-f1d27ef4ed3f] Pending
	I1212 20:11:40.052149  365855 system_pods.go:74] duration metric: took 14.070015ms to wait for pod list to return data ...
	I1212 20:11:40.052160  365855 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:11:40.059155  365855 default_sa.go:45] found service account: "default"
	I1212 20:11:40.059233  365855 default_sa.go:55] duration metric: took 7.065802ms for default service account to be created ...
	I1212 20:11:40.059262  365855 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:11:40.133016  365855 system_pods.go:86] 19 kube-system pods found
	I1212 20:11:40.133099  365855 system_pods.go:89] "coredns-66bc5c9577-9rqzw" [057d8772-1bb0-492d-9aa7-6363449d3dff] Pending
	I1212 20:11:40.133122  365855 system_pods.go:89] "csi-hostpath-attacher-0" [37fb58df-d546-442b-a670-5bc0591cb463] Pending
	I1212 20:11:40.133143  365855 system_pods.go:89] "csi-hostpath-resizer-0" [58e72845-9acf-4248-98fe-d2148d2c6241] Pending
	I1212 20:11:40.133182  365855 system_pods.go:89] "csi-hostpathplugin-5b869" [facbae11-2217-40f3-8871-eb549588ac4c] Pending
	I1212 20:11:40.133208  365855 system_pods.go:89] "etcd-addons-603031" [655ab573-e8d3-4059-bc18-cbff1d8c1470] Running
	I1212 20:11:40.133229  365855 system_pods.go:89] "kindnet-2dtkn" [f011e7d0-45c3-4bda-bdb6-4714cb7ab310] Running
	I1212 20:11:40.133249  365855 system_pods.go:89] "kube-apiserver-addons-603031" [8d5df7d6-482a-4f34-8c46-7e17b01e4ea9] Running
	I1212 20:11:40.133271  365855 system_pods.go:89] "kube-controller-manager-addons-603031" [136e1ec3-e803-4b8f-b19d-50b6a6142cf4] Running
	I1212 20:11:40.133304  365855 system_pods.go:89] "kube-ingress-dns-minikube" [716456fd-092e-4201-b01a-dee91e8a3804] Pending
	I1212 20:11:40.133325  365855 system_pods.go:89] "kube-proxy-6c94h" [e813b5e8-481e-4d67-9ac6-44618fff8d3e] Running
	I1212 20:11:40.133344  365855 system_pods.go:89] "kube-scheduler-addons-603031" [a3da1652-6b66-47ea-8874-4a0bb4bc1e62] Running
	I1212 20:11:40.133366  365855 system_pods.go:89] "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Pending
	I1212 20:11:40.133401  365855 system_pods.go:89] "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Pending
	I1212 20:11:40.133421  365855 system_pods.go:89] "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Pending
	I1212 20:11:40.133443  365855 system_pods.go:89] "registry-creds-764b6fb674-7zll2" [3e7c826f-448c-4599-a385-861be612bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 20:11:40.133474  365855 system_pods.go:89] "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Pending
	I1212 20:11:40.133497  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5jcb6" [6ef718fa-1c34-41d2-9b8f-a3fdb531333d] Pending
	I1212 20:11:40.133516  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbnmg" [16ac7458-41a3-4701-8352-abf5e11cf295] Pending
	I1212 20:11:40.133536  365855 system_pods.go:89] "storage-provisioner" [89573572-bfc6-4422-b808-f1d27ef4ed3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:11:40.133581  365855 retry.go:31] will retry after 262.633772ms: missing components: kube-dns
	I1212 20:11:40.133848  365855 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 20:11:40.133888  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:40.305112  365855 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 20:11:40.305372  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:40.305351  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:40.363548  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:40.414743  365855 system_pods.go:86] 19 kube-system pods found
	I1212 20:11:40.414823  365855 system_pods.go:89] "coredns-66bc5c9577-9rqzw" [057d8772-1bb0-492d-9aa7-6363449d3dff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:11:40.414845  365855 system_pods.go:89] "csi-hostpath-attacher-0" [37fb58df-d546-442b-a670-5bc0591cb463] Pending
	I1212 20:11:40.414867  365855 system_pods.go:89] "csi-hostpath-resizer-0" [58e72845-9acf-4248-98fe-d2148d2c6241] Pending
	I1212 20:11:40.414901  365855 system_pods.go:89] "csi-hostpathplugin-5b869" [facbae11-2217-40f3-8871-eb549588ac4c] Pending
	I1212 20:11:40.414927  365855 system_pods.go:89] "etcd-addons-603031" [655ab573-e8d3-4059-bc18-cbff1d8c1470] Running
	I1212 20:11:40.414948  365855 system_pods.go:89] "kindnet-2dtkn" [f011e7d0-45c3-4bda-bdb6-4714cb7ab310] Running
	I1212 20:11:40.414968  365855 system_pods.go:89] "kube-apiserver-addons-603031" [8d5df7d6-482a-4f34-8c46-7e17b01e4ea9] Running
	I1212 20:11:40.414988  365855 system_pods.go:89] "kube-controller-manager-addons-603031" [136e1ec3-e803-4b8f-b19d-50b6a6142cf4] Running
	I1212 20:11:40.415023  365855 system_pods.go:89] "kube-ingress-dns-minikube" [716456fd-092e-4201-b01a-dee91e8a3804] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 20:11:40.415041  365855 system_pods.go:89] "kube-proxy-6c94h" [e813b5e8-481e-4d67-9ac6-44618fff8d3e] Running
	I1212 20:11:40.415059  365855 system_pods.go:89] "kube-scheduler-addons-603031" [a3da1652-6b66-47ea-8874-4a0bb4bc1e62] Running
	I1212 20:11:40.415081  365855 system_pods.go:89] "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 20:11:40.415114  365855 system_pods.go:89] "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Pending
	I1212 20:11:40.415131  365855 system_pods.go:89] "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Pending
	I1212 20:11:40.415151  365855 system_pods.go:89] "registry-creds-764b6fb674-7zll2" [3e7c826f-448c-4599-a385-861be612bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 20:11:40.415170  365855 system_pods.go:89] "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Pending
	I1212 20:11:40.415201  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5jcb6" [6ef718fa-1c34-41d2-9b8f-a3fdb531333d] Pending
	I1212 20:11:40.415228  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbnmg" [16ac7458-41a3-4701-8352-abf5e11cf295] Pending
	I1212 20:11:40.415249  365855 system_pods.go:89] "storage-provisioner" [89573572-bfc6-4422-b808-f1d27ef4ed3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:11:40.415281  365855 retry.go:31] will retry after 351.351313ms: missing components: kube-dns
	I1212 20:11:40.576008  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:40.832133  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:40.851789  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:40.867794  365855 system_pods.go:86] 19 kube-system pods found
	I1212 20:11:40.867839  365855 system_pods.go:89] "coredns-66bc5c9577-9rqzw" [057d8772-1bb0-492d-9aa7-6363449d3dff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:11:40.867847  365855 system_pods.go:89] "csi-hostpath-attacher-0" [37fb58df-d546-442b-a670-5bc0591cb463] Pending
	I1212 20:11:40.867855  365855 system_pods.go:89] "csi-hostpath-resizer-0" [58e72845-9acf-4248-98fe-d2148d2c6241] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 20:11:40.867862  365855 system_pods.go:89] "csi-hostpathplugin-5b869" [facbae11-2217-40f3-8871-eb549588ac4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 20:11:40.867866  365855 system_pods.go:89] "etcd-addons-603031" [655ab573-e8d3-4059-bc18-cbff1d8c1470] Running
	I1212 20:11:40.867871  365855 system_pods.go:89] "kindnet-2dtkn" [f011e7d0-45c3-4bda-bdb6-4714cb7ab310] Running
	I1212 20:11:40.867876  365855 system_pods.go:89] "kube-apiserver-addons-603031" [8d5df7d6-482a-4f34-8c46-7e17b01e4ea9] Running
	I1212 20:11:40.867880  365855 system_pods.go:89] "kube-controller-manager-addons-603031" [136e1ec3-e803-4b8f-b19d-50b6a6142cf4] Running
	I1212 20:11:40.867891  365855 system_pods.go:89] "kube-ingress-dns-minikube" [716456fd-092e-4201-b01a-dee91e8a3804] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 20:11:40.867896  365855 system_pods.go:89] "kube-proxy-6c94h" [e813b5e8-481e-4d67-9ac6-44618fff8d3e] Running
	I1212 20:11:40.867916  365855 system_pods.go:89] "kube-scheduler-addons-603031" [a3da1652-6b66-47ea-8874-4a0bb4bc1e62] Running
	I1212 20:11:40.867922  365855 system_pods.go:89] "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 20:11:40.867933  365855 system_pods.go:89] "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Pending
	I1212 20:11:40.867940  365855 system_pods.go:89] "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 20:11:40.867946  365855 system_pods.go:89] "registry-creds-764b6fb674-7zll2" [3e7c826f-448c-4599-a385-861be612bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 20:11:40.867956  365855 system_pods.go:89] "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 20:11:40.867964  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5jcb6" [6ef718fa-1c34-41d2-9b8f-a3fdb531333d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 20:11:40.867971  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbnmg" [16ac7458-41a3-4701-8352-abf5e11cf295] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 20:11:40.867979  365855 system_pods.go:89] "storage-provisioner" [89573572-bfc6-4422-b808-f1d27ef4ed3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:11:40.868001  365855 retry.go:31] will retry after 475.387205ms: missing components: kube-dns
	I1212 20:11:40.898935  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:41.097763  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:41.266783  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:41.266897  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:41.368912  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:41.369492  365855 system_pods.go:86] 19 kube-system pods found
	I1212 20:11:41.369519  365855 system_pods.go:89] "coredns-66bc5c9577-9rqzw" [057d8772-1bb0-492d-9aa7-6363449d3dff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:11:41.369550  365855 system_pods.go:89] "csi-hostpath-attacher-0" [37fb58df-d546-442b-a670-5bc0591cb463] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 20:11:41.369568  365855 system_pods.go:89] "csi-hostpath-resizer-0" [58e72845-9acf-4248-98fe-d2148d2c6241] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 20:11:41.369575  365855 system_pods.go:89] "csi-hostpathplugin-5b869" [facbae11-2217-40f3-8871-eb549588ac4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 20:11:41.369584  365855 system_pods.go:89] "etcd-addons-603031" [655ab573-e8d3-4059-bc18-cbff1d8c1470] Running
	I1212 20:11:41.369588  365855 system_pods.go:89] "kindnet-2dtkn" [f011e7d0-45c3-4bda-bdb6-4714cb7ab310] Running
	I1212 20:11:41.369593  365855 system_pods.go:89] "kube-apiserver-addons-603031" [8d5df7d6-482a-4f34-8c46-7e17b01e4ea9] Running
	I1212 20:11:41.369597  365855 system_pods.go:89] "kube-controller-manager-addons-603031" [136e1ec3-e803-4b8f-b19d-50b6a6142cf4] Running
	I1212 20:11:41.369609  365855 system_pods.go:89] "kube-ingress-dns-minikube" [716456fd-092e-4201-b01a-dee91e8a3804] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 20:11:41.369613  365855 system_pods.go:89] "kube-proxy-6c94h" [e813b5e8-481e-4d67-9ac6-44618fff8d3e] Running
	I1212 20:11:41.369624  365855 system_pods.go:89] "kube-scheduler-addons-603031" [a3da1652-6b66-47ea-8874-4a0bb4bc1e62] Running
	I1212 20:11:41.369633  365855 system_pods.go:89] "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 20:11:41.369646  365855 system_pods.go:89] "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 20:11:41.369667  365855 system_pods.go:89] "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 20:11:41.369674  365855 system_pods.go:89] "registry-creds-764b6fb674-7zll2" [3e7c826f-448c-4599-a385-861be612bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 20:11:41.369685  365855 system_pods.go:89] "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 20:11:41.369698  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5jcb6" [6ef718fa-1c34-41d2-9b8f-a3fdb531333d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 20:11:41.369710  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbnmg" [16ac7458-41a3-4701-8352-abf5e11cf295] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 20:11:41.369714  365855 system_pods.go:89] "storage-provisioner" [89573572-bfc6-4422-b808-f1d27ef4ed3f] Running
	I1212 20:11:41.369723  365855 system_pods.go:126] duration metric: took 1.310440423s to wait for k8s-apps to be running ...
	I1212 20:11:41.369735  365855 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:11:41.369800  365855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:11:41.390264  365855 system_svc.go:56] duration metric: took 20.519227ms WaitForService to wait for kubelet
	I1212 20:11:41.390307  365855 kubeadm.go:587] duration metric: took 44.651086478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:11:41.390327  365855 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:11:41.393227  365855 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 20:11:41.393268  365855 node_conditions.go:123] node cpu capacity is 2
	I1212 20:11:41.393282  365855 node_conditions.go:105] duration metric: took 2.950236ms to run NodePressure ...
	I1212 20:11:41.393296  365855 start.go:242] waiting for startup goroutines ...
	I1212 20:11:41.575077  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:41.766632  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:41.766735  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:41.873715  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:42.081740  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:42.265941  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:42.267264  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:42.359802  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:42.575417  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:42.766577  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:42.767288  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:42.859480  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:43.083033  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:43.273425  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:43.273814  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:43.360476  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:43.575724  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:43.769435  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:43.769885  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:43.859599  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:44.089337  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:44.267831  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:44.274777  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:44.359765  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:44.576052  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:44.767803  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:44.768057  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:44.865938  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:45.081372  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:45.273656  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:45.278125  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:45.362070  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:45.577850  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:45.772076  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:45.772247  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:45.868884  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:46.079384  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:46.265710  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:46.266688  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:46.360051  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:46.576006  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:46.766818  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:46.767168  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:46.859415  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:47.081880  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:47.265294  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:47.265634  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:47.358798  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:47.574865  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:47.766625  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:47.767003  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:47.867414  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:48.080700  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:48.265406  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:48.267014  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:48.359872  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:48.575396  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:48.766507  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:48.766883  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:48.859046  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:49.082411  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:49.266900  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:49.267670  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:49.358481  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:49.574525  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:49.766376  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:49.766556  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:49.864421  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:50.082025  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:50.265288  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:50.265426  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:50.359229  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:50.574941  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:50.766295  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:50.766528  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:50.859472  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:51.079415  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:51.264854  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:51.265341  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:51.359436  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:51.575361  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:51.765873  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:51.766817  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:51.858668  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:52.080256  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:52.266250  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:52.266408  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:52.359401  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:52.575008  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:52.765508  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:52.765663  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:52.859394  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:53.075168  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:53.266048  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:53.266578  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:53.359177  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:53.574814  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:53.766137  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:53.766529  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:53.863552  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:54.084279  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:54.266087  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:54.266219  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:54.359229  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:54.575265  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:54.767128  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:54.767565  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:54.859632  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:55.079052  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:55.266754  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:55.267235  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:55.359406  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:55.575419  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:55.766127  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:55.766445  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:55.866667  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:56.081432  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:56.276461  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:56.277455  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:56.359807  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:56.575671  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:56.765425  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:56.765572  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:56.859408  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:57.078081  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:57.265547  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:57.265684  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:57.358997  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:57.575494  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:57.767204  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:57.767656  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:57.859046  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:58.081477  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:58.266659  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:58.272008  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:58.367922  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:58.577829  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:58.765909  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:58.766062  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:58.859015  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:59.095952  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:59.265499  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:59.266209  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:59.359476  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:59.574418  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:59.765998  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:59.766106  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:59.859389  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:00.214683  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:00.312468  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:00.312560  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:00.389482  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:00.575490  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:00.765692  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:00.766480  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:00.860221  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:01.081337  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:01.266569  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:01.266730  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:01.373186  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:01.577489  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:01.766094  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:01.767338  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:01.859699  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:02.081495  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:02.266346  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:02.266751  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:02.360992  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:02.585347  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:02.765606  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:02.766539  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:02.859598  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:03.082830  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:03.266957  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:03.268332  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:03.359084  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:03.575349  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:03.768301  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:03.768695  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:03.867688  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:04.082861  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:04.266230  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:04.267647  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:04.359323  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:04.574526  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:04.765995  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:04.766125  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:04.859494  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:05.081774  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:05.267254  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:05.267808  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:05.358750  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:05.575540  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:05.765840  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:05.766089  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:05.866088  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:06.094225  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:06.296428  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:06.296589  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:06.383588  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:06.575187  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:06.774246  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:06.774335  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:06.859092  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:07.090617  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:07.266952  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:07.267358  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:07.358593  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:07.575389  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:07.766751  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:07.766886  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:07.858741  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:08.079480  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:08.265203  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:08.265649  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:08.359418  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:08.574933  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:08.765775  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:08.766194  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:08.858922  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:09.086272  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:09.266265  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:09.266496  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:09.359388  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:09.575033  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:09.766637  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:09.768105  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:09.859734  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:10.079956  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:10.267137  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:10.267485  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:10.359462  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:10.575361  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:10.767089  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:10.767408  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:10.859420  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:11.081779  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:11.266666  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:11.267086  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:11.366489  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:11.576551  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:11.766226  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:11.766682  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:11.860067  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:12.080261  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:12.267362  365855 kapi.go:107] duration metric: took 1m9.005813365s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 20:12:12.268014  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:12.367845  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:12.575973  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:12.766461  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:12.859997  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:13.079643  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:13.266288  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:13.359513  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:13.581057  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:13.767784  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:13.858930  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:14.083111  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:14.266599  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:14.360777  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:14.575674  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:14.764737  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:14.859437  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:15.075693  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:15.265439  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:15.359746  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:15.575942  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:15.765114  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:15.859104  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:16.082368  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:16.265566  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:16.360216  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:16.574857  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:16.765456  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:16.859926  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:17.080860  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:17.265918  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:17.358876  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:17.576026  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:17.765686  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:17.860187  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:18.078314  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:18.265710  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:18.360469  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:18.575554  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:18.764995  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:18.859065  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:19.080457  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:19.266161  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:19.358902  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:19.576272  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:19.766225  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:19.859304  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:20.081220  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:20.278844  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:20.372560  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:20.576064  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:20.767310  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:20.859540  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:21.090518  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:21.271209  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:21.361938  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:21.576248  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:21.768439  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:21.861402  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:22.084186  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:22.266279  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:22.363021  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:22.575802  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:22.765191  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:22.859237  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:23.083133  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:23.266240  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:23.367496  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:23.575259  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:23.765580  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:23.859760  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:24.078856  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:24.265484  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:24.359553  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:24.575906  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:24.765651  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:24.858666  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:25.080329  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:25.265900  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:25.359405  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:25.574499  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:25.765866  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:25.859001  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:26.089125  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:26.269211  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:26.359641  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:26.576417  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:26.765377  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:26.860141  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:27.080776  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:27.266829  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:27.359080  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:27.577899  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:27.765552  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:27.860915  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:28.083178  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:28.265367  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:28.359081  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:28.575081  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:28.765594  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:28.859245  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:29.078465  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:29.265727  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:29.360044  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:29.575674  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:29.765318  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:29.859282  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:30.110560  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:30.265345  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:30.359169  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:30.575016  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:30.765476  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:30.859614  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:31.082672  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:31.265445  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:31.359721  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:31.576416  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:31.766061  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:31.859714  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:32.082695  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:32.264945  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:32.359058  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:32.576595  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:32.765629  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:32.870266  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:33.080659  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:33.267991  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:33.367049  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:33.575662  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:33.764992  365855 kapi.go:107] duration metric: took 1m30.50343335s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 20:12:33.859035  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:34.078936  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:34.359436  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:34.589891  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:34.859890  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:35.078973  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:35.359389  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:35.575173  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:35.859668  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:36.076525  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:36.360788  365855 kapi.go:107] duration metric: took 1m29.50505875s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 20:12:36.363764  365855 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-603031 cluster.
	I1212 20:12:36.366615  365855 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 20:12:36.369575  365855 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
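	As a minimal illustration of the gcp-auth-skip-secret label mentioned in the message above: the pod name, image, and the "true" value below are placeholder assumptions for this sketch, not values taken from this test run.

	    # Hypothetical pod spec: labeled so the gcp-auth webhook does not mount GCP credentials.
	    # Only the label key gcp-auth-skip-secret comes from the log message above; everything
	    # else (name, image, value "true") is a placeholder.
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds
	      labels:
	        gcp-auth-skip-secret: "true"
	    spec:
	      containers:
	      - name: app
	        image: gcr.io/k8s-minikube/busybox
	        command: ["sleep", "3600"]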
	I1212 20:12:36.575952  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:37.080850  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:37.575026  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:38.075706  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:38.575065  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:39.083758  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:39.574629  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:40.076573  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:40.574797  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:41.078597  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:41.575914  365855 kapi.go:107] duration metric: took 1m38.004754392s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 20:12:41.579090  365855 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, cloud-spanner, nvidia-device-plugin, registry-creds, storage-provisioner, metrics-server, storage-provisioner-rancher, inspektor-gadget, default-storageclass, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1212 20:12:41.581832  365855 addons.go:530] duration metric: took 1m44.842391025s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns cloud-spanner nvidia-device-plugin registry-creds storage-provisioner metrics-server storage-provisioner-rancher inspektor-gadget default-storageclass yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1212 20:12:41.581908  365855 start.go:247] waiting for cluster config update ...
	I1212 20:12:41.581930  365855 start.go:256] writing updated cluster config ...
	I1212 20:12:41.582277  365855 ssh_runner.go:195] Run: rm -f paused
	I1212 20:12:41.587501  365855 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:41.591444  365855 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9rqzw" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.598300  365855 pod_ready.go:94] pod "coredns-66bc5c9577-9rqzw" is "Ready"
	I1212 20:12:41.598376  365855 pod_ready.go:86] duration metric: took 6.85569ms for pod "coredns-66bc5c9577-9rqzw" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.601478  365855 pod_ready.go:83] waiting for pod "etcd-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.607063  365855 pod_ready.go:94] pod "etcd-addons-603031" is "Ready"
	I1212 20:12:41.607131  365855 pod_ready.go:86] duration metric: took 5.585654ms for pod "etcd-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.609807  365855 pod_ready.go:83] waiting for pod "kube-apiserver-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.615323  365855 pod_ready.go:94] pod "kube-apiserver-addons-603031" is "Ready"
	I1212 20:12:41.615398  365855 pod_ready.go:86] duration metric: took 5.516131ms for pod "kube-apiserver-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.618608  365855 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.991965  365855 pod_ready.go:94] pod "kube-controller-manager-addons-603031" is "Ready"
	I1212 20:12:41.991998  365855 pod_ready.go:86] duration metric: took 373.32438ms for pod "kube-controller-manager-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:42.193320  365855 pod_ready.go:83] waiting for pod "kube-proxy-6c94h" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:42.592069  365855 pod_ready.go:94] pod "kube-proxy-6c94h" is "Ready"
	I1212 20:12:42.592098  365855 pod_ready.go:86] duration metric: took 398.743564ms for pod "kube-proxy-6c94h" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:42.792840  365855 pod_ready.go:83] waiting for pod "kube-scheduler-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:43.192157  365855 pod_ready.go:94] pod "kube-scheduler-addons-603031" is "Ready"
	I1212 20:12:43.192226  365855 pod_ready.go:86] duration metric: took 399.359275ms for pod "kube-scheduler-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:43.192248  365855 pod_ready.go:40] duration metric: took 1.604671963s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:43.246563  365855 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 20:12:43.250274  365855 out.go:179] * Done! kubectl is now configured to use "addons-603031" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 20:14:52 addons-603031 crio[829]: time="2025-12-12T20:14:52.618120794Z" level=info msg="Removed pod sandbox: a21e1326f26bc57e65e44a175f26b9201c5cf111f639adeedfc5967eff06b1ef" id=bb6192c6-9c59-4f57-acea-d0189ffaee80 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.312119704Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-88wcz/POD" id=4558e1be-5b60-468d-b873-588ab23a6b76 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.312199803Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.335682574Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-88wcz Namespace:default ID:e10980c47c4f78cd371b7b60d6ac2c76fbed92aaef954132f92620b59ba1f6fe UID:5e421335-a6c9-477c-8604-c6bc52219493 NetNS:/var/run/netns/67d412f6-fdb5-4a46-b7d2-12f6cca07f23 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079df0}] Aliases:map[]}"
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.363884Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-88wcz to CNI network \"kindnet\" (type=ptp)"
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.3813922Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-88wcz Namespace:default ID:e10980c47c4f78cd371b7b60d6ac2c76fbed92aaef954132f92620b59ba1f6fe UID:5e421335-a6c9-477c-8604-c6bc52219493 NetNS:/var/run/netns/67d412f6-fdb5-4a46-b7d2-12f6cca07f23 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x4000079df0}] Aliases:map[]}"
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.381549207Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-88wcz for CNI network kindnet (type=ptp)"
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.387100216Z" level=info msg="Ran pod sandbox e10980c47c4f78cd371b7b60d6ac2c76fbed92aaef954132f92620b59ba1f6fe with infra container: default/hello-world-app-5d498dc89-88wcz/POD" id=4558e1be-5b60-468d-b873-588ab23a6b76 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.392073726Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a1d6eca6-579f-4201-86cc-d2a7a8a35bac name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.392710722Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=a1d6eca6-579f-4201-86cc-d2a7a8a35bac name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.392875983Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=a1d6eca6-579f-4201-86cc-d2a7a8a35bac name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.396962008Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=c44a7d5c-f75f-4b0c-9b65-da3a7cbf4c05 name=/runtime.v1.ImageService/PullImage
	Dec 12 20:15:42 addons-603031 crio[829]: time="2025-12-12T20:15:42.404072903Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.109366396Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b" id=c44a7d5c-f75f-4b0c-9b65-da3a7cbf4c05 name=/runtime.v1.ImageService/PullImage
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.110069936Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=360b33e8-a7b0-4c29-a3bb-e3e60c2c3fca name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.112081322Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=c19c16fb-d422-4a21-be68-682985331169 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.118438518Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-88wcz/hello-world-app" id=c65a8b09-760c-4f4d-af56-7671ddce4064 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.118684354Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.141187234Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.141710497Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b153231940575aed2693725838ea843e8802e538e87d62795006782092f79f98/merged/etc/passwd: no such file or directory"
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.141816131Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b153231940575aed2693725838ea843e8802e538e87d62795006782092f79f98/merged/etc/group: no such file or directory"
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.142169792Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.167018116Z" level=info msg="Created container 4bb5db4802878bacbf907ef9895b253f4968c25ea205b17a14e32d9f7cab5d13: default/hello-world-app-5d498dc89-88wcz/hello-world-app" id=c65a8b09-760c-4f4d-af56-7671ddce4064 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.168257218Z" level=info msg="Starting container: 4bb5db4802878bacbf907ef9895b253f4968c25ea205b17a14e32d9f7cab5d13" id=28f9121d-133f-436d-a3f7-842ffad5a0d4 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:15:43 addons-603031 crio[829]: time="2025-12-12T20:15:43.173251216Z" level=info msg="Started container" PID=6906 containerID=4bb5db4802878bacbf907ef9895b253f4968c25ea205b17a14e32d9f7cab5d13 description=default/hello-world-app-5d498dc89-88wcz/hello-world-app id=28f9121d-133f-436d-a3f7-842ffad5a0d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e10980c47c4f78cd371b7b60d6ac2c76fbed92aaef954132f92620b59ba1f6fe
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	4bb5db4802878       docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b                                        Less than a second ago   Running             hello-world-app                          0                   e10980c47c4f7       hello-world-app-5d498dc89-88wcz             default
	245b170a8bfc2       public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d                                           2 minutes ago            Running             nginx                                    0                   aa9c3331d8272       nginx                                       default
	18f984869b4fb       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          2 minutes ago            Running             busybox                                  0                   1c22e8b87cea5       busybox                                     default
	2fd403a0a3c1f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          3 minutes ago            Running             csi-snapshotter                          0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	4ad63355a4185       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          3 minutes ago            Running             csi-provisioner                          0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	38a91c939e267       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            3 minutes ago            Running             liveness-probe                           0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	de7b51c83e158       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           3 minutes ago            Running             hostpath                                 0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	6b2e11d8ab454       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 3 minutes ago            Running             gcp-auth                                 0                   1fd2aa9552077       gcp-auth-78565c9fb4-fm95l                   gcp-auth
	b4b133c92eb73       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             3 minutes ago            Running             controller                               0                   aca915a799585       ingress-nginx-controller-85d4c799dd-xvdhs   ingress-nginx
	682df2b59c950       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            3 minutes ago            Running             gadget                                   0                   aedd69022f38e       gadget-ldgrj                                gadget
	4809fb232f668       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                3 minutes ago            Running             node-driver-registrar                    0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	a82f1ede56743       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               3 minutes ago            Running             minikube-ingress-dns                     0                   eb68379552547       kube-ingress-dns-minikube                   kube-system
	9c3118d5851c9       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             3 minutes ago            Exited              patch                                    2                   3fea644d7881e       ingress-nginx-admission-patch-9v2hg         ingress-nginx
	f53fc93dd83c0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              3 minutes ago            Running             registry-proxy                           0                   0b6cb28584b7b       registry-proxy-2ppkm                        kube-system
	e415e482778c5       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             3 minutes ago            Running             csi-attacher                             0                   4884db98bbc3d       csi-hostpath-attacher-0                     kube-system
	d5c2cf4090c13       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   4423bef11c1a1       snapshot-controller-7d9fbc56b8-bbnmg        kube-system
	74073eb172ae6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   3 minutes ago            Exited              create                                   0                   bd40fbc5d9bdf       ingress-nginx-admission-create-szxsl        ingress-nginx
	9fac64cdd9389       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           3 minutes ago            Running             registry                                 0                   bbae55c095373       registry-6b586f9694-7qdmt                   kube-system
	bc9cffb778ec6       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               3 minutes ago            Running             cloud-spanner-emulator                   0                   224c78c64804d       cloud-spanner-emulator-5bdddb765-cfmxl      default
	bfb13326e68f2       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   0d3c5e66e6897       nvidia-device-plugin-daemonset-sthfk        kube-system
	421200960de75       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   3 minutes ago            Running             csi-external-health-monitor-controller   0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	f3c43f32965a1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      3 minutes ago            Running             volume-snapshot-controller               0                   4f93b1a079d55       snapshot-controller-7d9fbc56b8-5jcb6        kube-system
	ce0ade5e7b384       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              3 minutes ago            Running             csi-resizer                              0                   c7d04fbcab4b4       csi-hostpath-resizer-0                      kube-system
	a11300ef5861d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             3 minutes ago            Running             local-path-provisioner                   0                   7fa05a10ffe57       local-path-provisioner-648f6765c9-c98md     local-path-storage
	1244a5a603a61       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              3 minutes ago            Running             yakd                                     0                   1f7d8bd186c41       yakd-dashboard-5ff678cb9-v447b              yakd-dashboard
	bdd23d655fa55       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        4 minutes ago            Running             metrics-server                           0                   a7b036bac3c0e       metrics-server-85b7d694d7-q8cmr             kube-system
	dc26db242e241       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             4 minutes ago            Running             coredns                                  0                   006bc26307510       coredns-66bc5c9577-9rqzw                    kube-system
	e1266d6c75a1e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             4 minutes ago            Running             storage-provisioner                      0                   92df0428f10b7       storage-provisioner                         kube-system
	f05b6cd78460f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             4 minutes ago            Running             kindnet-cni                              0                   73e7a3b46adfc       kindnet-2dtkn                               kube-system
	d5b835e400afb       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             4 minutes ago            Running             kube-proxy                               0                   f7a64e29d4a80       kube-proxy-6c94h                            kube-system
	6b921948e7a2b       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             4 minutes ago            Running             kube-scheduler                           0                   255b97b30c218       kube-scheduler-addons-603031                kube-system
	389edf543c495       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             4 minutes ago            Running             kube-controller-manager                  0                   9078f5134f4dd       kube-controller-manager-addons-603031       kube-system
	e4de15886f671       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             4 minutes ago            Running             etcd                                     0                   7219ed53321da       etcd-addons-603031                          kube-system
	53fcf67696a94       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             4 minutes ago            Running             kube-apiserver                           0                   f469cb4a159bb       kube-apiserver-addons-603031                kube-system
	
	
	==> coredns [dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd] <==
	[INFO] 10.244.0.18:41736 - 54604 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001713315s
	[INFO] 10.244.0.18:41736 - 28307 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000126131s
	[INFO] 10.244.0.18:41736 - 27888 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000183042s
	[INFO] 10.244.0.18:60608 - 43013 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162858s
	[INFO] 10.244.0.18:60608 - 43507 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088198s
	[INFO] 10.244.0.18:39190 - 2110 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119181s
	[INFO] 10.244.0.18:39190 - 1905 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008535s
	[INFO] 10.244.0.18:38121 - 6508 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092678s
	[INFO] 10.244.0.18:38121 - 6328 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000143805s
	[INFO] 10.244.0.18:34529 - 11294 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001166232s
	[INFO] 10.244.0.18:34529 - 11738 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001082088s
	[INFO] 10.244.0.18:57201 - 37741 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000106454s
	[INFO] 10.244.0.18:57201 - 37610 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085884s
	[INFO] 10.244.0.21:41906 - 18684 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000228302s
	[INFO] 10.244.0.21:37023 - 10435 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000100637s
	[INFO] 10.244.0.21:48668 - 58272 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000280389s
	[INFO] 10.244.0.21:55727 - 3722 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016467s
	[INFO] 10.244.0.21:33592 - 58357 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000180384s
	[INFO] 10.244.0.21:33038 - 21713 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099611s
	[INFO] 10.244.0.21:45901 - 10876 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00256363s
	[INFO] 10.244.0.21:57361 - 61820 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002532237s
	[INFO] 10.244.0.21:38930 - 7311 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002201074s
	[INFO] 10.244.0.21:56790 - 39448 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002486305s
	[INFO] 10.244.0.23:41925 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000319126s
	[INFO] 10.244.0.23:39609 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000145191s
	
	
	==> describe nodes <==
	Name:               addons-603031
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-603031
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=addons-603031
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_10_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-603031
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-603031"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:10:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-603031
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:15:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:13:56 +0000   Fri, 12 Dec 2025 20:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:13:56 +0000   Fri, 12 Dec 2025 20:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:13:56 +0000   Fri, 12 Dec 2025 20:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:13:56 +0000   Fri, 12 Dec 2025 20:11:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-603031
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                d0303866-b2d5-479a-a0a7-1e376c628274
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
	  default                     cloud-spanner-emulator-5bdddb765-cfmxl       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  default                     hello-world-app-5d498dc89-88wcz              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-ldgrj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  gcp-auth                    gcp-auth-78565c9fb4-fm95l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m38s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-xvdhs    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         4m41s
	  kube-system                 coredns-66bc5c9577-9rqzw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m47s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 csi-hostpathplugin-5b869                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 etcd-addons-603031                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m52s
	  kube-system                 kindnet-2dtkn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m47s
	  kube-system                 kube-apiserver-addons-603031                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-controller-manager-addons-603031        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m52s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-proxy-6c94h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-scheduler-addons-603031                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 metrics-server-85b7d694d7-q8cmr              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         4m43s
	  kube-system                 nvidia-device-plugin-daemonset-sthfk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 registry-6b586f9694-7qdmt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 registry-creds-764b6fb674-7zll2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m45s
	  kube-system                 registry-proxy-2ppkm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-5jcb6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 snapshot-controller-7d9fbc56b8-bbnmg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m43s
	  local-path-storage          local-path-provisioner-648f6765c9-c98md      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-v447b               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     4m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m44s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  4m59s (x8 over 4m59s)  kubelet          Node addons-603031 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m59s (x8 over 4m59s)  kubelet          Node addons-603031 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m59s (x8 over 4m59s)  kubelet          Node addons-603031 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m52s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 4m52s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  4m52s                  kubelet          Node addons-603031 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m52s                  kubelet          Node addons-603031 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m52s                  kubelet          Node addons-603031 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           4m48s                  node-controller  Node addons-603031 event: Registered Node addons-603031 in Controller
	  Normal   NodeReady                4m5s                   kubelet          Node addons-603031 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6] <==
	{"level":"warn","ts":"2025-12-12T20:10:48.222876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.248837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.297611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.304938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.319028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.341018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.360815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.378412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.394557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.438087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.440355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.465084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.479204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.516016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.535545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.561777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.577536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.598577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.700814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:03.857092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:03.868791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:26.439220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:26.455026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:26.503394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:26.518728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39686","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [6b2e11d8ab454cbe479a492a15f224b1b0c5b514bf7e21f46cc09b191184004a] <==
	2025/12/12 20:12:35 GCP Auth Webhook started!
	2025/12/12 20:12:43 Ready to marshal response ...
	2025/12/12 20:12:43 Ready to write response ...
	2025/12/12 20:12:43 Ready to marshal response ...
	2025/12/12 20:12:43 Ready to write response ...
	2025/12/12 20:12:44 Ready to marshal response ...
	2025/12/12 20:12:44 Ready to write response ...
	2025/12/12 20:13:04 Ready to marshal response ...
	2025/12/12 20:13:04 Ready to write response ...
	2025/12/12 20:13:09 Ready to marshal response ...
	2025/12/12 20:13:09 Ready to write response ...
	2025/12/12 20:13:09 Ready to marshal response ...
	2025/12/12 20:13:09 Ready to write response ...
	2025/12/12 20:13:18 Ready to marshal response ...
	2025/12/12 20:13:18 Ready to write response ...
	2025/12/12 20:13:22 Ready to marshal response ...
	2025/12/12 20:13:22 Ready to write response ...
	2025/12/12 20:13:32 Ready to marshal response ...
	2025/12/12 20:13:32 Ready to write response ...
	2025/12/12 20:13:47 Ready to marshal response ...
	2025/12/12 20:13:47 Ready to write response ...
	2025/12/12 20:15:41 Ready to marshal response ...
	2025/12/12 20:15:41 Ready to write response ...
	
	
	==> kernel <==
	 20:15:44 up  2:58,  0 user,  load average: 1.22, 2.29, 2.04
	Linux addons-603031 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896] <==
	I1212 20:13:39.531427       1 main.go:301] handling current node
	I1212 20:13:49.530848       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:13:49.530881       1 main.go:301] handling current node
	I1212 20:13:59.531558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:13:59.531775       1 main.go:301] handling current node
	I1212 20:14:09.536088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:14:09.536125       1 main.go:301] handling current node
	I1212 20:14:19.535153       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:14:19.535194       1 main.go:301] handling current node
	I1212 20:14:29.532517       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:14:29.532554       1 main.go:301] handling current node
	I1212 20:14:39.534385       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:14:39.534507       1 main.go:301] handling current node
	I1212 20:14:49.536462       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:14:49.536564       1 main.go:301] handling current node
	I1212 20:14:59.536633       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:14:59.536733       1 main.go:301] handling current node
	I1212 20:15:09.530301       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:15:09.530595       1 main.go:301] handling current node
	I1212 20:15:19.530360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:15:19.530403       1 main.go:301] handling current node
	I1212 20:15:29.533156       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:15:29.533270       1 main.go:301] handling current node
	I1212 20:15:39.534877       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:15:39.534920       1 main.go:301] handling current node
	
	
	==> kube-apiserver [53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6] <==
	W1212 20:11:26.455013       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 20:11:26.503364       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 20:11:26.518729       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 20:11:39.854638       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.221.218:443: connect: connection refused
	E1212 20:11:39.854755       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.221.218:443: connect: connection refused" logger="UnhandledError"
	W1212 20:11:39.855264       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.221.218:443: connect: connection refused
	E1212 20:11:39.855380       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.221.218:443: connect: connection refused" logger="UnhandledError"
	W1212 20:11:39.946369       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.221.218:443: connect: connection refused
	E1212 20:11:39.946415       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.221.218:443: connect: connection refused" logger="UnhandledError"
	W1212 20:11:45.748762       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 20:11:45.748899       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1212 20:11:45.750834       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.106.141:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.106.141:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.106.141:443: connect: connection refused" logger="UnhandledError"
	E1212 20:11:45.752007       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.106.141:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.106.141:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.106.141:443: connect: connection refused" logger="UnhandledError"
	I1212 20:11:45.865650       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 20:12:53.461609       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54952: use of closed network connection
	E1212 20:12:53.612253       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54970: use of closed network connection
	I1212 20:13:21.763122       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1212 20:13:22.097293       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.224.7"}
	I1212 20:13:39.938271       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	E1212 20:13:41.645341       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1212 20:13:54.975944       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	I1212 20:15:42.158364       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.154.146"}
	
	
	==> kube-controller-manager [389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c] <==
	I1212 20:10:56.452720       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 20:10:56.452744       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 20:10:56.452749       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 20:10:56.452754       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 20:10:56.461828       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-603031" podCIDRs=["10.244.0.0/24"]
	I1212 20:10:56.461837       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 20:10:56.467616       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 20:10:56.469083       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 20:10:56.469053       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1212 20:10:56.469294       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 20:10:56.469368       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 20:10:56.469921       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 20:10:56.470737       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 20:10:56.471129       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 20:10:56.471668       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 20:10:56.475582       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1212 20:11:01.726060       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1212 20:11:26.429942       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 20:11:26.430108       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1212 20:11:26.430156       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1212 20:11:26.483835       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1212 20:11:26.493294       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 20:11:26.530788       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:11:26.595275       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:11:41.422821       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165] <==
	I1212 20:10:59.362862       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:10:59.485423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:10:59.588529       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:10:59.592137       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1212 20:10:59.592236       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:10:59.661925       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:10:59.661982       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:10:59.669465       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:10:59.670109       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:10:59.670128       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:10:59.677706       1 config.go:200] "Starting service config controller"
	I1212 20:10:59.677723       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:10:59.677871       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:10:59.677875       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:10:59.678052       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:10:59.678057       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:10:59.685652       1 config.go:309] "Starting node config controller"
	I1212 20:10:59.686140       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:10:59.686152       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:10:59.777928       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:10:59.778017       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:10:59.778250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3] <==
	I1212 20:10:50.823442       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:10:50.826062       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:10:50.826755       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:10:50.826781       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:10:50.835917       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1212 20:10:50.839834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 20:10:50.840294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 20:10:50.840461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 20:10:50.840631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 20:10:50.840700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 20:10:50.840786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 20:10:50.840857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 20:10:50.840896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 20:10:50.840941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 20:10:50.841054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 20:10:50.841130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 20:10:50.841177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 20:10:50.841254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 20:10:50.841290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 20:10:50.841368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 20:10:50.841416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 20:10:50.841551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 20:10:50.841602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 20:10:50.841669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1212 20:10:52.336594       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.571591    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzqsx\" (UniqueName: \"kubernetes.io/projected/b442831a-9663-404f-bfda-eddf0e37f06e-kube-api-access-dzqsx\") pod \"b442831a-9663-404f-bfda-eddf0e37f06e\" (UID: \"b442831a-9663-404f-bfda-eddf0e37f06e\") "
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.576637    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b442831a-9663-404f-bfda-eddf0e37f06e-kube-api-access-dzqsx" (OuterVolumeSpecName: "kube-api-access-dzqsx") pod "b442831a-9663-404f-bfda-eddf0e37f06e" (UID: "b442831a-9663-404f-bfda-eddf0e37f06e"). InnerVolumeSpecName "kube-api-access-dzqsx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.656824    1291 scope.go:117] "RemoveContainer" containerID="9b8da31aa8fe91570d4ff3da2cc5b175828375ce8a36cf681e3c2400cadf171f"
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.666392    1291 scope.go:117] "RemoveContainer" containerID="9b8da31aa8fe91570d4ff3da2cc5b175828375ce8a36cf681e3c2400cadf171f"
	Dec 12 20:13:54 addons-603031 kubelet[1291]: E1212 20:13:54.666893    1291 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b8da31aa8fe91570d4ff3da2cc5b175828375ce8a36cf681e3c2400cadf171f\": container with ID starting with 9b8da31aa8fe91570d4ff3da2cc5b175828375ce8a36cf681e3c2400cadf171f not found: ID does not exist" containerID="9b8da31aa8fe91570d4ff3da2cc5b175828375ce8a36cf681e3c2400cadf171f"
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.666956    1291 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b8da31aa8fe91570d4ff3da2cc5b175828375ce8a36cf681e3c2400cadf171f"} err="failed to get container status \"9b8da31aa8fe91570d4ff3da2cc5b175828375ce8a36cf681e3c2400cadf171f\": rpc error: code = NotFound desc = could not find container \"9b8da31aa8fe91570d4ff3da2cc5b175828375ce8a36cf681e3c2400cadf171f\": container with ID starting with 9b8da31aa8fe91570d4ff3da2cc5b175828375ce8a36cf681e3c2400cadf171f not found: ID does not exist"
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.672497    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^104188d5-d797-11f0-8090-7e0d28d5336e\") pod \"b442831a-9663-404f-bfda-eddf0e37f06e\" (UID: \"b442831a-9663-404f-bfda-eddf0e37f06e\") "
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.672585    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b442831a-9663-404f-bfda-eddf0e37f06e-gcp-creds\") pod \"b442831a-9663-404f-bfda-eddf0e37f06e\" (UID: \"b442831a-9663-404f-bfda-eddf0e37f06e\") "
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.672700    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dzqsx\" (UniqueName: \"kubernetes.io/projected/b442831a-9663-404f-bfda-eddf0e37f06e-kube-api-access-dzqsx\") on node \"addons-603031\" DevicePath \"\""
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.672740    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b442831a-9663-404f-bfda-eddf0e37f06e-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "b442831a-9663-404f-bfda-eddf0e37f06e" (UID: "b442831a-9663-404f-bfda-eddf0e37f06e"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.681160    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^104188d5-d797-11f0-8090-7e0d28d5336e" (OuterVolumeSpecName: "task-pv-storage") pod "b442831a-9663-404f-bfda-eddf0e37f06e" (UID: "b442831a-9663-404f-bfda-eddf0e37f06e"). InnerVolumeSpecName "pvc-ec97c199-ec94-4201-b81f-c8cab1bf0cb8". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.774060    1291 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-ec97c199-ec94-4201-b81f-c8cab1bf0cb8\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^104188d5-d797-11f0-8090-7e0d28d5336e\") on node \"addons-603031\" "
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.774109    1291 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/b442831a-9663-404f-bfda-eddf0e37f06e-gcp-creds\") on node \"addons-603031\" DevicePath \"\""
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.779328    1291 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-ec97c199-ec94-4201-b81f-c8cab1bf0cb8" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^104188d5-d797-11f0-8090-7e0d28d5336e") on node "addons-603031"
	Dec 12 20:13:54 addons-603031 kubelet[1291]: I1212 20:13:54.874424    1291 reconciler_common.go:299] "Volume detached for volume \"pvc-ec97c199-ec94-4201-b81f-c8cab1bf0cb8\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^104188d5-d797-11f0-8090-7e0d28d5336e\") on node \"addons-603031\" DevicePath \"\""
	Dec 12 20:13:56 addons-603031 kubelet[1291]: I1212 20:13:56.425238    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b442831a-9663-404f-bfda-eddf0e37f06e" path="/var/lib/kubelet/pods/b442831a-9663-404f-bfda-eddf0e37f06e/volumes"
	Dec 12 20:14:18 addons-603031 kubelet[1291]: I1212 20:14:18.423106    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-sthfk" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 20:14:34 addons-603031 kubelet[1291]: I1212 20:14:34.422802    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2ppkm" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 20:14:42 addons-603031 kubelet[1291]: I1212 20:14:42.422957    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-7qdmt" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 20:14:52 addons-603031 kubelet[1291]: E1212 20:14:52.614406    1291 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/13ebb4538ad9384860f7d4408c0ada025de423324173a9526a801a3d8a861ee7/diff" to get inode usage: stat /var/lib/containers/storage/overlay/13ebb4538ad9384860f7d4408c0ada025de423324173a9526a801a3d8a861ee7/diff: no such file or directory, extraDiskErr: <nil>
	Dec 12 20:15:42 addons-603031 kubelet[1291]: I1212 20:15:42.129668    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5e421335-a6c9-477c-8604-c6bc52219493-gcp-creds\") pod \"hello-world-app-5d498dc89-88wcz\" (UID: \"5e421335-a6c9-477c-8604-c6bc52219493\") " pod="default/hello-world-app-5d498dc89-88wcz"
	Dec 12 20:15:42 addons-603031 kubelet[1291]: I1212 20:15:42.129760    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cbx4\" (UniqueName: \"kubernetes.io/projected/5e421335-a6c9-477c-8604-c6bc52219493-kube-api-access-7cbx4\") pod \"hello-world-app-5d498dc89-88wcz\" (UID: \"5e421335-a6c9-477c-8604-c6bc52219493\") " pod="default/hello-world-app-5d498dc89-88wcz"
	Dec 12 20:15:44 addons-603031 kubelet[1291]: I1212 20:15:44.067565    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-88wcz" podStartSLOduration=2.350864048 podStartE2EDuration="3.067541696s" podCreationTimestamp="2025-12-12 20:15:41 +0000 UTC" firstStartedPulling="2025-12-12 20:15:42.394453559 +0000 UTC m=+290.115630273" lastFinishedPulling="2025-12-12 20:15:43.111131207 +0000 UTC m=+290.832307921" observedRunningTime="2025-12-12 20:15:44.066201547 +0000 UTC m=+291.787378285" watchObservedRunningTime="2025-12-12 20:15:44.067541696 +0000 UTC m=+291.788718418"
	Dec 12 20:15:44 addons-603031 kubelet[1291]: I1212 20:15:44.422554    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-2ppkm" secret="" err="secret \"gcp-auth\" not found"
	Dec 12 20:15:44 addons-603031 kubelet[1291]: I1212 20:15:44.423564    1291 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-sthfk" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22] <==
	W1212 20:15:20.251935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:22.255357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:22.262253       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:24.265447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:24.270121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:26.273258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:26.277535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:28.281210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:28.287942       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:30.290625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:30.295385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:32.298382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:32.303022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:34.306373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:34.313375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:36.316884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:36.321584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:38.324108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:38.330861       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:40.334387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:40.341341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:42.351842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:42.363816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:44.367108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:15:44.372325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-603031 -n addons-603031
helpers_test.go:270: (dbg) Run:  kubectl --context addons-603031 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-szxsl ingress-nginx-admission-patch-9v2hg registry-creds-764b6fb674-7zll2
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-603031 describe pod ingress-nginx-admission-create-szxsl ingress-nginx-admission-patch-9v2hg registry-creds-764b6fb674-7zll2
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-603031 describe pod ingress-nginx-admission-create-szxsl ingress-nginx-admission-patch-9v2hg registry-creds-764b6fb674-7zll2: exit status 1 (92.559277ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-szxsl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9v2hg" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-7zll2" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-603031 describe pod ingress-nginx-admission-create-szxsl ingress-nginx-admission-patch-9v2hg registry-creds-764b6fb674-7zll2: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (277.183425ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:15:45.588560  375238 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:15:45.589881  375238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:15:45.589897  375238 out.go:374] Setting ErrFile to fd 2...
	I1212 20:15:45.589903  375238 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:15:45.590185  375238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:15:45.590493  375238 mustload.go:66] Loading cluster: addons-603031
	I1212 20:15:45.590924  375238 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:15:45.590947  375238 addons.go:622] checking whether the cluster is paused
	I1212 20:15:45.591063  375238 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:15:45.591078  375238 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:15:45.591651  375238 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:15:45.609362  375238 ssh_runner.go:195] Run: systemctl --version
	I1212 20:15:45.609448  375238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:15:45.626686  375238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:15:45.735277  375238 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:15:45.735405  375238 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:15:45.770279  375238 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:15:45.770308  375238 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:15:45.770313  375238 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:15:45.770318  375238 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:15:45.770322  375238 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:15:45.770325  375238 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:15:45.770329  375238 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:15:45.770332  375238 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:15:45.770335  375238 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:15:45.770342  375238 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:15:45.770345  375238 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:15:45.770348  375238 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:15:45.770351  375238 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:15:45.770354  375238 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:15:45.770357  375238 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:15:45.770363  375238 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:15:45.770370  375238 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:15:45.770374  375238 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:15:45.770377  375238 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:15:45.770380  375238 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:15:45.770385  375238 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:15:45.770387  375238 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:15:45.770390  375238 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:15:45.770393  375238 cri.go:89] found id: ""
	I1212 20:15:45.770451  375238 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:15:45.786989  375238 out.go:203] 
	W1212 20:15:45.789803  375238 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:15:45Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:15:45Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:15:45.789837  375238 out.go:285] * 
	* 
	W1212 20:15:45.794887  375238 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:15:45.797827  375238 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
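	(Not part of the captured output: a minimal manual check, assuming the addons-603031 node container is still running. The exit status 11 above comes from minikube's paused-cluster check, which runs `sudo runc list -f json` inside the node and fails with `open /run/runc: no such file or directory` on this cri-o image, so the addon disable aborts before touching the addon. The two commands below are illustrative only and were not run by the test:

		# does the runc state directory exist inside the node?
		out/minikube-linux-arm64 -p addons-603031 ssh -- "sudo ls /run/runc"
		# re-run the exact command the paused check uses
		out/minikube-linux-arm64 -p addons-603031 ssh -- "sudo runc list -f json"

	If both fail with the same "no such file or directory" error, every `addons disable` invocation on this profile will exit 11 with MK_ADDON_DISABLE_PAUSED regardless of the addon's actual state, which matches the repeated failure for the ingress addon below.)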
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable ingress --alsologtostderr -v=1: exit status 11 (270.698737ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:15:45.857324  375284 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:15:45.858059  375284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:15:45.858098  375284 out.go:374] Setting ErrFile to fd 2...
	I1212 20:15:45.858119  375284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:15:45.858425  375284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:15:45.858767  375284 mustload.go:66] Loading cluster: addons-603031
	I1212 20:15:45.859199  375284 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:15:45.859248  375284 addons.go:622] checking whether the cluster is paused
	I1212 20:15:45.859385  375284 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:15:45.859420  375284 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:15:45.860005  375284 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:15:45.877446  375284 ssh_runner.go:195] Run: systemctl --version
	I1212 20:15:45.877518  375284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:15:45.895735  375284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:15:46.008451  375284 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:15:46.008547  375284 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:15:46.041884  375284 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:15:46.041909  375284 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:15:46.041914  375284 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:15:46.041918  375284 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:15:46.041922  375284 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:15:46.041926  375284 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:15:46.041929  375284 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:15:46.041933  375284 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:15:46.041936  375284 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:15:46.041947  375284 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:15:46.041955  375284 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:15:46.041959  375284 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:15:46.041962  375284 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:15:46.041965  375284 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:15:46.041968  375284 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:15:46.041975  375284 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:15:46.041981  375284 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:15:46.041986  375284 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:15:46.041989  375284 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:15:46.041992  375284 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:15:46.041997  375284 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:15:46.042000  375284 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:15:46.042003  375284 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:15:46.042006  375284 cri.go:89] found id: ""
	I1212 20:15:46.042059  375284 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:15:46.056802  375284 out.go:203] 
	W1212 20:15:46.059674  375284 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:15:46Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:15:46Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:15:46.059700  375284 out.go:285] * 
	* 
	W1212 20:15:46.064785  375284 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:15:46.067813  375284 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (144.69s)
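Every addons disable/enable call in this run appears to fail the same way: the paused-state pre-flight check lists the kube-system containers via crictl successfully, then runs "sudo runc list -f json", which exits 1 with "open /run/runc: no such file or directory", and minikube aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED for the headlamp enable below). One plausible explanation, not confirmed by these logs, is that cri-o on this node is configured with crun or a non-default runc root, so /run/runc is never created. A quick way to check that hypothesis from the host would be the following diagnostic sketch (the profile name addons-603031 is taken from the commands above; the /etc/crio location is an assumption about the node image):

	minikube -p addons-603031 ssh -- ls -ld /run/runc /run/crun /run/crio    # which runtime state directories actually exist on the node
	minikube -p addons-603031 ssh -- sudo grep -rn default_runtime /etc/crio # which OCI runtime cri-o is configured to use, if the config lives in the standard place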

                                                
                                    
TestAddons/parallel/InspektorGadget (5.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-ldgrj" [398d7bc9-422d-4c72-a03e-b8c01f68b573] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003058296s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (265.737403ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:14:00.997389  374143 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:14:00.998472  374143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:14:00.998513  374143 out.go:374] Setting ErrFile to fd 2...
	I1212 20:14:00.998534  374143 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:14:00.998860  374143 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:14:00.999271  374143 mustload.go:66] Loading cluster: addons-603031
	I1212 20:14:00.999824  374143 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:14:00.999888  374143 addons.go:622] checking whether the cluster is paused
	I1212 20:14:01.000098  374143 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:14:01.000136  374143 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:14:01.001034  374143 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:14:01.019786  374143 ssh_runner.go:195] Run: systemctl --version
	I1212 20:14:01.019851  374143 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:14:01.037683  374143 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:14:01.143162  374143 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:14:01.143262  374143 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:14:01.181111  374143 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:14:01.181135  374143 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:14:01.181140  374143 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:14:01.181144  374143 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:14:01.181148  374143 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:14:01.181151  374143 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:14:01.181155  374143 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:14:01.181158  374143 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:14:01.181161  374143 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:14:01.181169  374143 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:14:01.181173  374143 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:14:01.181176  374143 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:14:01.181179  374143 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:14:01.181182  374143 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:14:01.181185  374143 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:14:01.181191  374143 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:14:01.181199  374143 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:14:01.181203  374143 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:14:01.181207  374143 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:14:01.181210  374143 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:14:01.181215  374143 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:14:01.181218  374143 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:14:01.181221  374143 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:14:01.181225  374143 cri.go:89] found id: ""
	I1212 20:14:01.181281  374143 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:14:01.197304  374143 out.go:203] 
	W1212 20:14:01.200461  374143 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:14:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:14:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:14:01.200490  374143 out.go:285] * 
	* 
	W1212 20:14:01.205943  374143 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:14:01.209076  374143 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.27s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.37s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 5.490103ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004387387s
addons_test.go:465: (dbg) Run:  kubectl --context addons-603031 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (261.437936ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:13:21.170521  373174 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:13:21.171462  373174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:21.171516  373174 out.go:374] Setting ErrFile to fd 2...
	I1212 20:13:21.171537  373174 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:21.171886  373174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:13:21.172327  373174 mustload.go:66] Loading cluster: addons-603031
	I1212 20:13:21.172805  373174 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:21.172899  373174 addons.go:622] checking whether the cluster is paused
	I1212 20:13:21.173135  373174 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:21.173174  373174 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:13:21.173736  373174 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:13:21.191645  373174 ssh_runner.go:195] Run: systemctl --version
	I1212 20:13:21.191716  373174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:13:21.209244  373174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:13:21.315201  373174 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:13:21.315296  373174 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:13:21.346668  373174 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:13:21.346691  373174 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:13:21.346696  373174 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:13:21.346713  373174 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:13:21.346717  373174 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:13:21.346721  373174 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:13:21.346725  373174 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:13:21.346728  373174 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:13:21.346731  373174 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:13:21.346740  373174 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:13:21.346746  373174 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:13:21.346750  373174 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:13:21.346753  373174 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:13:21.346756  373174 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:13:21.346758  373174 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:13:21.346763  373174 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:13:21.346766  373174 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:13:21.346770  373174 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:13:21.346773  373174 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:13:21.346776  373174 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:13:21.346780  373174 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:13:21.346782  373174 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:13:21.346785  373174 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:13:21.346788  373174 cri.go:89] found id: ""
	I1212 20:13:21.346844  373174 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:13:21.364036  373174 out.go:203] 
	W1212 20:13:21.367098  373174 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:13:21.367123  373174 out.go:285] * 
	* 
	W1212 20:13:21.372206  373174 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:13:21.375252  373174 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.37s)

                                                
                                    
TestAddons/parallel/CSI (37.54s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1212 20:13:18.405591  364853 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1212 20:13:18.409540  364853 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1212 20:13:18.409571  364853 kapi.go:107] duration metric: took 6.051931ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 6.065404ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-603031 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-603031 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [dad5ac43-11d3-447a-b814-48e398a88943] Pending
helpers_test.go:353: "task-pv-pod" [dad5ac43-11d3-447a-b814-48e398a88943] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003455123s
addons_test.go:574: (dbg) Run:  kubectl --context addons-603031 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-603031 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-603031 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-603031 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-603031 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-603031 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-603031 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [b442831a-9663-404f-bfda-eddf0e37f06e] Pending
helpers_test.go:353: "task-pv-pod-restore" [b442831a-9663-404f-bfda-eddf0e37f06e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [b442831a-9663-404f-bfda-eddf0e37f06e] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003882544s
addons_test.go:616: (dbg) Run:  kubectl --context addons-603031 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-603031 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-603031 delete volumesnapshot new-snapshot-demo
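The CSI scenario itself (create PVC, pod, snapshot, then restore and clean up) completed; only the addon-disable calls that follow fail, for the same runc reason noted under the Ingress test above. The repeated helpers_test.go:403 and helpers_test.go:428 lines are the harness polling .status.phase and .status.readyToUse via jsonpath. Outside the harness, the same wait can be expressed in a single command; this is only a sketch, assuming kubectl v1.23 or newer (where --for=jsonpath is available), with the context and resource names taken from the log:

	kubectl --context addons-603031 wait pvc/hpvc --for=jsonpath='{.status.phase}'=Bound --timeout=6m
	kubectl --context addons-603031 wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true --timeout=6m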
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (265.061484ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:13:55.434896  374030 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:13:55.435738  374030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:55.435777  374030 out.go:374] Setting ErrFile to fd 2...
	I1212 20:13:55.435798  374030 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:55.436108  374030 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:13:55.436461  374030 mustload.go:66] Loading cluster: addons-603031
	I1212 20:13:55.436875  374030 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:55.436917  374030 addons.go:622] checking whether the cluster is paused
	I1212 20:13:55.437048  374030 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:55.437083  374030 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:13:55.437609  374030 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:13:55.454821  374030 ssh_runner.go:195] Run: systemctl --version
	I1212 20:13:55.454872  374030 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:13:55.474030  374030 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:13:55.579040  374030 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:13:55.579127  374030 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:13:55.615676  374030 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:13:55.615700  374030 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:13:55.615709  374030 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:13:55.615720  374030 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:13:55.615723  374030 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:13:55.615726  374030 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:13:55.615729  374030 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:13:55.615732  374030 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:13:55.615735  374030 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:13:55.615742  374030 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:13:55.615746  374030 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:13:55.615749  374030 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:13:55.615752  374030 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:13:55.615755  374030 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:13:55.615758  374030 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:13:55.615764  374030 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:13:55.615771  374030 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:13:55.615775  374030 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:13:55.615779  374030 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:13:55.615781  374030 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:13:55.615786  374030 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:13:55.615789  374030 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:13:55.615793  374030 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:13:55.615796  374030 cri.go:89] found id: ""
	I1212 20:13:55.615850  374030 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:13:55.634735  374030 out.go:203] 
	W1212 20:13:55.637830  374030 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:13:55.637855  374030 out.go:285] * 
	* 
	W1212 20:13:55.644004  374030 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:13:55.646886  374030 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (291.213949ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:13:55.705740  374076 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:13:55.706528  374076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:55.706543  374076 out.go:374] Setting ErrFile to fd 2...
	I1212 20:13:55.706548  374076 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:55.706808  374076 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:13:55.707118  374076 mustload.go:66] Loading cluster: addons-603031
	I1212 20:13:55.707502  374076 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:55.707521  374076 addons.go:622] checking whether the cluster is paused
	I1212 20:13:55.707626  374076 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:55.707639  374076 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:13:55.708203  374076 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:13:55.729043  374076 ssh_runner.go:195] Run: systemctl --version
	I1212 20:13:55.729097  374076 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:13:55.747105  374076 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:13:55.854898  374076 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:13:55.855038  374076 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:13:55.895451  374076 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:13:55.895481  374076 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:13:55.895486  374076 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:13:55.895489  374076 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:13:55.895502  374076 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:13:55.895507  374076 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:13:55.895510  374076 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:13:55.895513  374076 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:13:55.895535  374076 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:13:55.895546  374076 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:13:55.895550  374076 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:13:55.895554  374076 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:13:55.895560  374076 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:13:55.895564  374076 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:13:55.895585  374076 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:13:55.895599  374076 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:13:55.895603  374076 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:13:55.895618  374076 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:13:55.895624  374076 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:13:55.895632  374076 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:13:55.895638  374076 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:13:55.895642  374076 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:13:55.895663  374076 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:13:55.895674  374076 cri.go:89] found id: ""
	I1212 20:13:55.895755  374076 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:13:55.916522  374076 out.go:203] 
	W1212 20:13:55.923256  374076 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:13:55.923297  374076 out.go:285] * 
	* 
	W1212 20:13:55.928987  374076 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:13:55.935029  374076 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (37.54s)

                                                
                                    
TestAddons/parallel/Headlamp (3.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-603031 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable headlamp -p addons-603031 --alsologtostderr -v=1: exit status 11 (272.51149ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:12:53.935420  371965 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:12:53.936284  371965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:53.936328  371965 out.go:374] Setting ErrFile to fd 2...
	I1212 20:12:53.936350  371965 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:53.936783  371965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:12:53.937321  371965 mustload.go:66] Loading cluster: addons-603031
	I1212 20:12:53.937782  371965 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:53.937819  371965 addons.go:622] checking whether the cluster is paused
	I1212 20:12:53.937975  371965 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:53.938006  371965 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:12:53.938578  371965 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:12:53.957225  371965 ssh_runner.go:195] Run: systemctl --version
	I1212 20:12:53.957302  371965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:12:53.973966  371965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:12:54.087643  371965 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:12:54.087732  371965 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:12:54.122783  371965 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:12:54.122803  371965 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:12:54.122807  371965 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:12:54.122811  371965 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:12:54.122814  371965 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:12:54.122818  371965 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:12:54.122821  371965 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:12:54.122824  371965 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:12:54.122828  371965 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:12:54.122834  371965 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:12:54.122837  371965 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:12:54.122840  371965 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:12:54.122844  371965 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:12:54.122847  371965 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:12:54.122850  371965 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:12:54.122854  371965 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:12:54.122857  371965 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:12:54.122862  371965 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:12:54.122865  371965 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:12:54.122868  371965 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:12:54.122873  371965 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:12:54.122876  371965 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:12:54.122879  371965 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:12:54.122882  371965 cri.go:89] found id: ""
	I1212 20:12:54.122931  371965 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:12:54.138014  371965 out.go:203] 
	W1212 20:12:54.140962  371965 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:12:54.140988  371965 out.go:285] * 
	* 
	W1212 20:12:54.146068  371965 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:12:54.149041  371965 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-arm64 addons enable headlamp -p addons-603031 --alsologtostderr -v=1": exit status 11
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-603031
helpers_test.go:244: (dbg) docker inspect addons-603031:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d",
	        "Created": "2025-12-12T20:10:25.50131524Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 366282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:10:25.588980896Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d/hostname",
	        "HostsPath": "/var/lib/docker/containers/a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d/hosts",
	        "LogPath": "/var/lib/docker/containers/a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d/a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d-json.log",
	        "Name": "/addons-603031",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-603031:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-603031",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a97e7cf8ec133a0fa19a567bb3c2858cd25a0df6be5352676d020f9da049289d",
	                "LowerDir": "/var/lib/docker/overlay2/6c8365389a915be1368c688b3c136baeaa82eaaf97aa1171231441e9576ffbba-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6c8365389a915be1368c688b3c136baeaa82eaaf97aa1171231441e9576ffbba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6c8365389a915be1368c688b3c136baeaa82eaaf97aa1171231441e9576ffbba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6c8365389a915be1368c688b3c136baeaa82eaaf97aa1171231441e9576ffbba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-603031",
	                "Source": "/var/lib/docker/volumes/addons-603031/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-603031",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-603031",
	                "name.minikube.sigs.k8s.io": "addons-603031",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0a01ffc4a7512eb3bb17f18f3d2fb2ff623e6bdc5de8cbfda60b5df285c6f8f7",
	            "SandboxKey": "/var/run/docker/netns/0a01ffc4a751",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-603031": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:8f:d6:79:11:d1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1e609896b1bb2f4e51b06a0aeeafa65d36f83a53d2d0617984d4f134269288e0",
	                    "EndpointID": "f511be285609bfdbd3e0fdb964a3b44b32691d6a046b8ca24ee8b5bfa674bd82",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-603031",
	                        "a97e7cf8ec13"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
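The inspect output above is what the post-mortem uses to confirm the container state and the published host ports (22/tcp -> 33147, 8443/tcp -> 33150, and so on). For reference, the SSH port that the provisioning log below dials can be read back with the same Go template minikube itself runs, assuming the addons-603031 container is still present:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031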
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-603031 -n addons-603031
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p addons-603031 logs -n 25: (1.453589749s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-220862 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-220862   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ delete  │ -p download-only-220862                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-220862   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -o=json --download-only -p download-only-206451 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-206451   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ delete  │ -p download-only-206451                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-206451   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -o=json --download-only -p download-only-527569 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                         │ download-only-527569   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ delete  │ -p download-only-527569                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-527569   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ delete  │ -p download-only-220862                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-220862   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ delete  │ -p download-only-206451                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-206451   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ delete  │ -p download-only-527569                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-527569   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ --download-only -p download-docker-584504 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-584504 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ delete  │ -p download-docker-584504                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-584504 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ --download-only -p binary-mirror-598936 --alsologtostderr --binary-mirror http://127.0.0.1:40449 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-598936   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ delete  │ -p binary-mirror-598936                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-598936   │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ addons  │ disable dashboard -p addons-603031                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ addons  │ enable dashboard -p addons-603031                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ start   │ -p addons-603031 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:12 UTC │
	│ addons  │ addons-603031 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ addons  │ addons-603031 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	│ addons  │ enable headlamp -p addons-603031 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-603031          │ jenkins │ v1.37.0 │ 12 Dec 25 20:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:09:58
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:09:58.990623  365855 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:09:58.990906  365855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:58.990937  365855 out.go:374] Setting ErrFile to fd 2...
	I1212 20:09:58.990956  365855 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:58.991246  365855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:09:58.991795  365855 out.go:368] Setting JSON to false
	I1212 20:09:58.992707  365855 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10351,"bootTime":1765559848,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:09:58.992805  365855 start.go:143] virtualization:  
	I1212 20:09:58.996671  365855 out.go:179] * [addons-603031] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:09:59.000456  365855 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:09:59.000838  365855 notify.go:221] Checking for updates...
	I1212 20:09:59.007141  365855 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:09:59.010314  365855 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:09:59.013485  365855 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:09:59.016755  365855 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:09:59.019861  365855 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:09:59.023048  365855 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:09:59.058796  365855 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:09:59.058968  365855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:09:59.114107  365855 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 20:09:59.104877509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:09:59.114214  365855 docker.go:319] overlay module found
	I1212 20:09:59.117447  365855 out.go:179] * Using the docker driver based on user configuration
	I1212 20:09:59.120243  365855 start.go:309] selected driver: docker
	I1212 20:09:59.120259  365855 start.go:927] validating driver "docker" against <nil>
	I1212 20:09:59.120272  365855 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:09:59.121068  365855 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:09:59.174868  365855 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 20:09:59.165444327 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:09:59.175024  365855 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:09:59.175239  365855 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:09:59.178157  365855 out.go:179] * Using Docker driver with root privileges
	I1212 20:09:59.181050  365855 cni.go:84] Creating CNI manager for ""
	I1212 20:09:59.181123  365855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:09:59.181140  365855 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:09:59.181232  365855 start.go:353] cluster config:
	{Name:addons-603031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-603031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1212 20:09:59.184361  365855 out.go:179] * Starting "addons-603031" primary control-plane node in "addons-603031" cluster
	I1212 20:09:59.187159  365855 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:09:59.190185  365855 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:09:59.193136  365855 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:09:59.193196  365855 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 20:09:59.193209  365855 cache.go:65] Caching tarball of preloaded images
	I1212 20:09:59.193235  365855 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:09:59.193302  365855 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:09:59.193313  365855 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:09:59.193660  365855 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/config.json ...
	I1212 20:09:59.193691  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/config.json: {Name:mk36eaea1020099c8427d6188db2385f2d523dad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:09:59.209537  365855 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 20:09:59.209685  365855 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 20:09:59.209708  365855 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory, skipping pull
	I1212 20:09:59.209713  365855 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in cache, skipping pull
	I1212 20:09:59.209720  365855 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 as a tarball
	I1212 20:09:59.209729  365855 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 from local cache
	I1212 20:10:17.943401  365855 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 from cached tarball
	I1212 20:10:17.943446  365855 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:10:17.943500  365855 start.go:360] acquireMachinesLock for addons-603031: {Name:mkf4d918b051b7cae7b1771e0ec6d6c76a294488 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:10:17.943633  365855 start.go:364] duration metric: took 108.391µs to acquireMachinesLock for "addons-603031"
	I1212 20:10:17.943664  365855 start.go:93] Provisioning new machine with config: &{Name:addons-603031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-603031 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:17.943743  365855 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:10:17.947161  365855 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1212 20:10:17.947514  365855 start.go:159] libmachine.API.Create for "addons-603031" (driver="docker")
	I1212 20:10:17.947564  365855 client.go:173] LocalClient.Create starting
	I1212 20:10:17.947712  365855 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem
	I1212 20:10:18.703311  365855 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem
	I1212 20:10:19.006033  365855 cli_runner.go:164] Run: docker network inspect addons-603031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:10:19.022308  365855 cli_runner.go:211] docker network inspect addons-603031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:10:19.022403  365855 network_create.go:284] running [docker network inspect addons-603031] to gather additional debugging logs...
	I1212 20:10:19.022448  365855 cli_runner.go:164] Run: docker network inspect addons-603031
	W1212 20:10:19.039090  365855 cli_runner.go:211] docker network inspect addons-603031 returned with exit code 1
	I1212 20:10:19.039131  365855 network_create.go:287] error running [docker network inspect addons-603031]: docker network inspect addons-603031: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-603031 not found
	I1212 20:10:19.039147  365855 network_create.go:289] output of [docker network inspect addons-603031]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-603031 not found
	
	** /stderr **
	I1212 20:10:19.039246  365855 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:19.060273  365855 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bd93c0}
	I1212 20:10:19.060320  365855 network_create.go:124] attempt to create docker network addons-603031 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 20:10:19.060410  365855 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-603031 addons-603031
	I1212 20:10:19.122318  365855 network_create.go:108] docker network addons-603031 192.168.49.0/24 created
	I1212 20:10:19.122355  365855 kic.go:121] calculated static IP "192.168.49.2" for the "addons-603031" container
	I1212 20:10:19.122455  365855 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:10:19.137779  365855 cli_runner.go:164] Run: docker volume create addons-603031 --label name.minikube.sigs.k8s.io=addons-603031 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:10:19.155138  365855 oci.go:103] Successfully created a docker volume addons-603031
	I1212 20:10:19.155234  365855 cli_runner.go:164] Run: docker run --rm --name addons-603031-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-603031 --entrypoint /usr/bin/test -v addons-603031:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:10:21.411656  365855 cli_runner.go:217] Completed: docker run --rm --name addons-603031-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-603031 --entrypoint /usr/bin/test -v addons-603031:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib: (2.256382323s)
	I1212 20:10:21.411692  365855 oci.go:107] Successfully prepared a docker volume addons-603031
	I1212 20:10:21.411737  365855 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:21.411755  365855 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:10:21.411826  365855 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-603031:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:10:25.432634  365855 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-603031:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (4.020761329s)
	I1212 20:10:25.432670  365855 kic.go:203] duration metric: took 4.020910336s to extract preloaded images to volume ...
	W1212 20:10:25.432830  365855 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 20:10:25.432951  365855 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:10:25.485721  365855 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-603031 --name addons-603031 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-603031 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-603031 --network addons-603031 --ip 192.168.49.2 --volume addons-603031:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:10:25.794767  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Running}}
	I1212 20:10:25.817350  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:25.840511  365855 cli_runner.go:164] Run: docker exec addons-603031 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:10:25.896178  365855 oci.go:144] the created container "addons-603031" has a running status.
	I1212 20:10:25.896211  365855 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa...
	I1212 20:10:26.437903  365855 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:10:26.457419  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:26.475522  365855 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:10:26.475547  365855 kic_runner.go:114] Args: [docker exec --privileged addons-603031 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:10:26.515657  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:26.533444  365855 machine.go:94] provisionDockerMachine start ...
	I1212 20:10:26.533556  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:26.550679  365855 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:26.551022  365855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I1212 20:10:26.551038  365855 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:10:26.551642  365855 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54096->127.0.0.1:33147: read: connection reset by peer
	I1212 20:10:29.704101  365855 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-603031
	
	I1212 20:10:29.704127  365855 ubuntu.go:182] provisioning hostname "addons-603031"
	I1212 20:10:29.704200  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:29.722739  365855 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:29.723052  365855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I1212 20:10:29.723068  365855 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-603031 && echo "addons-603031" | sudo tee /etc/hostname
	I1212 20:10:29.882329  365855 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-603031
	
	I1212 20:10:29.882406  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:29.901687  365855 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:29.902021  365855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I1212 20:10:29.902042  365855 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-603031' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-603031/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-603031' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:10:30.100250  365855 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:10:30.100278  365855 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:10:30.100304  365855 ubuntu.go:190] setting up certificates
	I1212 20:10:30.100328  365855 provision.go:84] configureAuth start
	I1212 20:10:30.100424  365855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-603031
	I1212 20:10:30.128420  365855 provision.go:143] copyHostCerts
	I1212 20:10:30.128524  365855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:10:30.128674  365855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:10:30.128738  365855 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:10:30.128829  365855 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.addons-603031 san=[127.0.0.1 192.168.49.2 addons-603031 localhost minikube]
	I1212 20:10:30.725505  365855 provision.go:177] copyRemoteCerts
	I1212 20:10:30.725572  365855 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:10:30.725615  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:30.742595  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:30.847974  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:10:30.865319  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 20:10:30.883766  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:10:30.902419  365855 provision.go:87] duration metric: took 802.065687ms to configureAuth
	I1212 20:10:30.902452  365855 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:10:30.902653  365855 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:30.902763  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:30.921383  365855 main.go:143] libmachine: Using SSH client type: native
	I1212 20:10:30.921700  365855 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33147 <nil> <nil>}
	I1212 20:10:30.921719  365855 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:10:31.248237  365855 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:10:31.248303  365855 machine.go:97] duration metric: took 4.714835649s to provisionDockerMachine
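Note: the CRIO_MINIKUBE_OPTIONS drop-in written to /etc/sysconfig/crio.minikube a few lines above only takes effect if the crio systemd unit in the kicbase image actually sources that file (an assumption here, since the unit file itself is not shown in this log). A hedged way to confirm that wiring from the node would be:

	sudo systemctl cat crio | grep -i -A1 EnvironmentFile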
	I1212 20:10:31.248321  365855 client.go:176] duration metric: took 13.300747585s to LocalClient.Create
	I1212 20:10:31.248341  365855 start.go:167] duration metric: took 13.300828932s to libmachine.API.Create "addons-603031"
	I1212 20:10:31.248354  365855 start.go:293] postStartSetup for "addons-603031" (driver="docker")
	I1212 20:10:31.248390  365855 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:10:31.248460  365855 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:10:31.248523  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:31.267907  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:31.372692  365855 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:10:31.375990  365855 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:10:31.376064  365855 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:10:31.376085  365855 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:10:31.376165  365855 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:10:31.376192  365855 start.go:296] duration metric: took 127.831819ms for postStartSetup
	I1212 20:10:31.376542  365855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-603031
	I1212 20:10:31.393877  365855 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/config.json ...
	I1212 20:10:31.394174  365855 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:10:31.394225  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:31.410976  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:31.513782  365855 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:10:31.518935  365855 start.go:128] duration metric: took 13.575174059s to createHost
	I1212 20:10:31.518973  365855 start.go:83] releasing machines lock for "addons-603031", held for 13.575316937s
	I1212 20:10:31.519057  365855 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-603031
	I1212 20:10:31.536638  365855 ssh_runner.go:195] Run: cat /version.json
	I1212 20:10:31.536702  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:31.536956  365855 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:10:31.537022  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:31.556141  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:31.565731  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:31.659948  365855 ssh_runner.go:195] Run: systemctl --version
	I1212 20:10:31.764213  365855 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:10:31.799054  365855 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:10:31.803493  365855 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:10:31.803573  365855 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:10:31.832962  365855 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 20:10:31.833027  365855 start.go:496] detecting cgroup driver to use...
	I1212 20:10:31.833068  365855 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:10:31.833126  365855 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:10:31.851056  365855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:10:31.864808  365855 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:10:31.864912  365855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:10:31.881912  365855 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:10:31.900511  365855 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:10:32.019285  365855 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:10:32.143380  365855 docker.go:234] disabling docker service ...
	I1212 20:10:32.143512  365855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:10:32.165229  365855 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:10:32.178573  365855 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:10:32.292077  365855 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:10:32.413572  365855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:10:32.426872  365855 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:10:32.442316  365855 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:10:32.442407  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.452177  365855 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:10:32.452290  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.463077  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.473062  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.483000  365855 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:10:32.492774  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.502544  365855 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.517663  365855 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:10:32.526573  365855 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:10:32.534411  365855 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:10:32.542257  365855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:32.657040  365855 ssh_runner.go:195] Run: sudo systemctl restart crio
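Taken together, the sed edits above amount to roughly the following settings in /etc/crio/crio.conf.d/02-crio.conf at the time of the restart (a sketch reconstructed from the commands themselves, not captured from the node):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]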
	I1212 20:10:32.837669  365855 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:10:32.837770  365855 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:10:32.841638  365855 start.go:564] Will wait 60s for crictl version
	I1212 20:10:32.841706  365855 ssh_runner.go:195] Run: which crictl
	I1212 20:10:32.845133  365855 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:10:32.868395  365855 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:10:32.868558  365855 ssh_runner.go:195] Run: crio --version
	I1212 20:10:32.899162  365855 ssh_runner.go:195] Run: crio --version
	I1212 20:10:32.930305  365855 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:10:32.932961  365855 cli_runner.go:164] Run: docker network inspect addons-603031 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:10:32.949236  365855 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:10:32.953336  365855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
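A hedged sanity check for the /etc/hosts entry added above (not executed in this run) is to resolve the name through NSS on the node:

	getent hosts host.minikube.internal
	# expected: 192.168.49.1    host.minikube.internal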
	I1212 20:10:32.963678  365855 kubeadm.go:884] updating cluster {Name:addons-603031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-603031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:10:32.963801  365855 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:10:32.963866  365855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:32.998573  365855 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:32.998597  365855 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:10:32.998658  365855 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:10:33.042268  365855 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:10:33.042294  365855 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:10:33.042302  365855 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 20:10:33.042391  365855 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-603031 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:addons-603031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:10:33.042477  365855 ssh_runner.go:195] Run: crio config
	I1212 20:10:33.111907  365855 cni.go:84] Creating CNI manager for ""
	I1212 20:10:33.111932  365855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:33.111951  365855 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:10:33.111974  365855 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-603031 NodeName:addons-603031 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:10:33.112112  365855 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-603031"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:10:33.112191  365855 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:10:33.120501  365855 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:10:33.120620  365855 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:10:33.128779  365855 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 20:10:33.142833  365855 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:10:33.155922  365855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
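If the kubeadm configuration rendered above ever needed to be checked by hand, one illustrative option (not something this run executes) is to validate the file written here before it is copied into place for init; recent kubeadm releases ship a validate subcommand:

	sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new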
	I1212 20:10:33.168184  365855 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:10:33.171904  365855 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:10:33.181663  365855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:33.288898  365855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:10:33.305957  365855 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031 for IP: 192.168.49.2
	I1212 20:10:33.305982  365855 certs.go:195] generating shared ca certs ...
	I1212 20:10:33.306013  365855 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:33.306144  365855 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:10:33.732565  365855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt ...
	I1212 20:10:33.732600  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt: {Name:mk136a4872d4735b1a51b53120b75a5ccade3b84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:33.732798  365855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key ...
	I1212 20:10:33.732812  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key: {Name:mkd182407294285cd09f957d2c29d8a2f449bcba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:33.732903  365855 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:10:33.856698  365855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt ...
	I1212 20:10:33.856725  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt: {Name:mkacf2c7f9ae40d6aaec7f7a170dec87e851d722 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:33.856891  365855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key ...
	I1212 20:10:33.856905  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key: {Name:mkb2be966cf482840e728784dfb858a82dbe8b45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:33.856984  365855 certs.go:257] generating profile certs ...
	I1212 20:10:33.857042  365855 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.key
	I1212 20:10:33.857061  365855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt with IP's: []
	I1212 20:10:34.089489  365855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt ...
	I1212 20:10:34.089526  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: {Name:mk837e5f2ccbdfb557804fd902094182abc3757a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.089721  365855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.key ...
	I1212 20:10:34.089735  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.key: {Name:mk53aa0be088e657c02e69186bdee9e510afb09d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.089827  365855 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key.b7d6e408
	I1212 20:10:34.089847  365855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt.b7d6e408 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 20:10:34.689182  365855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt.b7d6e408 ...
	I1212 20:10:34.689216  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt.b7d6e408: {Name:mkdef7807c8cc1f6201a5888891951c2c01bf017 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.689401  365855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key.b7d6e408 ...
	I1212 20:10:34.689419  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key.b7d6e408: {Name:mkebe7d9b8692e69282594ba9f0372c88639708d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.689493  365855 certs.go:382] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt.b7d6e408 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt
	I1212 20:10:34.689579  365855 certs.go:386] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key.b7d6e408 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key
	I1212 20:10:34.689634  365855 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.key
	I1212 20:10:34.689655  365855 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.crt with IP's: []
	I1212 20:10:34.915927  365855 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.crt ...
	I1212 20:10:34.915959  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.crt: {Name:mkf6d1d069059ae3210ccaa8b5c6e4f517bd9d57 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.916145  365855 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.key ...
	I1212 20:10:34.916159  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.key: {Name:mk123dc9296ff8e8688845e2505214d0152caaf6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:34.916347  365855 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:10:34.916418  365855 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:10:34.916452  365855 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:10:34.916484  365855 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:10:34.917043  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:10:34.936782  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:10:34.959031  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:10:34.978963  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:10:34.998783  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 20:10:35.027942  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:10:35.049790  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:10:35.070964  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:10:35.093715  365855 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:10:35.114671  365855 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:10:35.130080  365855 ssh_runner.go:195] Run: openssl version
	I1212 20:10:35.136995  365855 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:35.145394  365855 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:10:35.154673  365855 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:35.159449  365855 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:35.159523  365855 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:10:35.202467  365855 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:10:35.211157  365855 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
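The two steps above wire the minikube CA into the system OpenSSL trust store: the subject hash printed by openssl x509 -hash (b5213941 in this run) becomes the symlink name under /etc/ssl/certs. A hedged check that the link works, not part of this log, would be:

	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem
	# expected: /usr/share/ca-certificates/minikubeCA.pem: OK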
	I1212 20:10:35.219756  365855 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:10:35.223704  365855 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:10:35.223777  365855 kubeadm.go:401] StartCluster: {Name:addons-603031 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:addons-603031 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:10:35.223889  365855 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:10:35.223956  365855 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:10:35.253734  365855 cri.go:89] found id: ""
	I1212 20:10:35.253876  365855 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:10:35.262834  365855 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:10:35.271671  365855 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:10:35.271748  365855 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:10:35.280801  365855 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:10:35.280825  365855 kubeadm.go:158] found existing configuration files:
	
	I1212 20:10:35.280891  365855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 20:10:35.289757  365855 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:10:35.289835  365855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:10:35.298264  365855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 20:10:35.306851  365855 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:10:35.306930  365855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:10:35.315091  365855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 20:10:35.323637  365855 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:10:35.323708  365855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:10:35.331554  365855 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 20:10:35.339440  365855 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:10:35.339512  365855 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:10:35.347505  365855 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:10:35.417395  365855 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1212 20:10:35.417718  365855 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:10:35.485500  365855 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:10:53.066812  365855 kubeadm.go:319] [init] Using Kubernetes version: v1.34.2
	I1212 20:10:53.066874  365855 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:10:53.066963  365855 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:10:53.067019  365855 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:10:53.067057  365855 kubeadm.go:319] OS: Linux
	I1212 20:10:53.067103  365855 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:10:53.067153  365855 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:10:53.067200  365855 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:10:53.067249  365855 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:10:53.067296  365855 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:10:53.067349  365855 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:10:53.067396  365855 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:10:53.067459  365855 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:10:53.067508  365855 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:10:53.067581  365855 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:10:53.067672  365855 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:10:53.067758  365855 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:10:53.067819  365855 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:10:53.071037  365855 out.go:252]   - Generating certificates and keys ...
	I1212 20:10:53.071142  365855 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:10:53.071216  365855 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:10:53.071289  365855 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:10:53.071350  365855 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:10:53.071415  365855 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:10:53.071476  365855 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:10:53.071534  365855 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:10:53.071654  365855 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-603031 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 20:10:53.071711  365855 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:10:53.071840  365855 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-603031 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 20:10:53.071920  365855 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:10:53.071988  365855 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:10:53.072036  365855 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:10:53.072095  365855 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:10:53.072170  365855 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:10:53.072231  365855 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:10:53.072290  365855 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:10:53.072357  365855 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:10:53.072441  365855 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:10:53.072529  365855 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:10:53.072666  365855 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:10:53.075686  365855 out.go:252]   - Booting up control plane ...
	I1212 20:10:53.075847  365855 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:10:53.075934  365855 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:10:53.076006  365855 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:10:53.076128  365855 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:10:53.076229  365855 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:10:53.076341  365855 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:10:53.076469  365855 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:10:53.076512  365855 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:10:53.076651  365855 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:10:53.076762  365855 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:10:53.076825  365855 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.004714464s
	I1212 20:10:53.076922  365855 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1212 20:10:53.077007  365855 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1212 20:10:53.077102  365855 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1212 20:10:53.077188  365855 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1212 20:10:53.077269  365855 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.715128518s
	I1212 20:10:53.077343  365855 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.682716309s
	I1212 20:10:53.077420  365855 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.501565858s
	I1212 20:10:53.077541  365855 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1212 20:10:53.077676  365855 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1212 20:10:53.077738  365855 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1212 20:10:53.077955  365855 kubeadm.go:319] [mark-control-plane] Marking the node addons-603031 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1212 20:10:53.078015  365855 kubeadm.go:319] [bootstrap-token] Using token: nbgdzp.csbyudvbvi3h3xct
	I1212 20:10:53.081077  365855 out.go:252]   - Configuring RBAC rules ...
	I1212 20:10:53.081222  365855 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1212 20:10:53.081315  365855 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1212 20:10:53.081522  365855 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1212 20:10:53.081706  365855 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1212 20:10:53.081836  365855 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1212 20:10:53.081929  365855 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1212 20:10:53.082045  365855 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1212 20:10:53.082093  365855 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1212 20:10:53.082142  365855 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1212 20:10:53.082153  365855 kubeadm.go:319] 
	I1212 20:10:53.082210  365855 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1212 20:10:53.082218  365855 kubeadm.go:319] 
	I1212 20:10:53.082290  365855 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1212 20:10:53.082297  365855 kubeadm.go:319] 
	I1212 20:10:53.082321  365855 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1212 20:10:53.082380  365855 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1212 20:10:53.082431  365855 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1212 20:10:53.082440  365855 kubeadm.go:319] 
	I1212 20:10:53.082491  365855 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1212 20:10:53.082498  365855 kubeadm.go:319] 
	I1212 20:10:53.082543  365855 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1212 20:10:53.082550  365855 kubeadm.go:319] 
	I1212 20:10:53.082599  365855 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1212 20:10:53.082686  365855 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1212 20:10:53.082761  365855 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1212 20:10:53.082772  365855 kubeadm.go:319] 
	I1212 20:10:53.082866  365855 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1212 20:10:53.082960  365855 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1212 20:10:53.082969  365855 kubeadm.go:319] 
	I1212 20:10:53.083051  365855 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token nbgdzp.csbyudvbvi3h3xct \
	I1212 20:10:53.083167  365855 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:adaa875fcacb3059f0eec2e4962c24570f977d0d03cb0131f0cb68ee03e4f578 \
	I1212 20:10:53.083195  365855 kubeadm.go:319] 	--control-plane 
	I1212 20:10:53.083203  365855 kubeadm.go:319] 
	I1212 20:10:53.083294  365855 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1212 20:10:53.083301  365855 kubeadm.go:319] 
	I1212 20:10:53.083384  365855 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token nbgdzp.csbyudvbvi3h3xct \
	I1212 20:10:53.083512  365855 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:adaa875fcacb3059f0eec2e4962c24570f977d0d03cb0131f0cb68ee03e4f578 
	I1212 20:10:53.083532  365855 cni.go:84] Creating CNI manager for ""
	I1212 20:10:53.083560  365855 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:10:53.086856  365855 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1212 20:10:53.089890  365855 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1212 20:10:53.094652  365855 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.2/kubectl ...
	I1212 20:10:53.094677  365855 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1212 20:10:53.111432  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1212 20:10:53.434925  365855 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1212 20:10:53.435048  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:53.435128  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-603031 minikube.k8s.io/updated_at=2025_12_12T20_10_53_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300 minikube.k8s.io/name=addons-603031 minikube.k8s.io/primary=true
	I1212 20:10:53.457994  365855 ops.go:34] apiserver oom_adj: -16
	I1212 20:10:53.649083  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:54.149314  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:54.649623  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:55.149315  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:55.649454  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:56.149414  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:56.649563  365855 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1212 20:10:56.738242  365855 kubeadm.go:1114] duration metric: took 3.303234942s to wait for elevateKubeSystemPrivileges
	I1212 20:10:56.738289  365855 kubeadm.go:403] duration metric: took 21.514533948s to StartCluster
	I1212 20:10:56.738308  365855 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:56.738459  365855 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:10:56.738921  365855 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:10:56.739122  365855 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1212 20:10:56.739190  365855 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:10:56.739400  365855 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:56.739450  365855 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1212 20:10:56.739538  365855 addons.go:70] Setting yakd=true in profile "addons-603031"
	I1212 20:10:56.739552  365855 addons.go:239] Setting addon yakd=true in "addons-603031"
	I1212 20:10:56.739578  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.740063  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.740550  365855 addons.go:70] Setting metrics-server=true in profile "addons-603031"
	I1212 20:10:56.740577  365855 addons.go:239] Setting addon metrics-server=true in "addons-603031"
	I1212 20:10:56.740602  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.740675  365855 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-603031"
	I1212 20:10:56.740725  365855 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-603031"
	I1212 20:10:56.740777  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.741022  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.741368  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.744704  365855 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-603031"
	I1212 20:10:56.744742  365855 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-603031"
	I1212 20:10:56.744776  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.745235  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.747386  365855 addons.go:70] Setting registry=true in profile "addons-603031"
	I1212 20:10:56.747485  365855 addons.go:239] Setting addon registry=true in "addons-603031"
	I1212 20:10:56.747552  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.748197  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.749970  365855 addons.go:70] Setting cloud-spanner=true in profile "addons-603031"
	I1212 20:10:56.750004  365855 addons.go:239] Setting addon cloud-spanner=true in "addons-603031"
	I1212 20:10:56.750054  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.750622  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.761748  365855 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-603031"
	I1212 20:10:56.761817  365855 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-603031"
	I1212 20:10:56.761847  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.762320  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.765518  365855 addons.go:70] Setting registry-creds=true in profile "addons-603031"
	I1212 20:10:56.765613  365855 addons.go:239] Setting addon registry-creds=true in "addons-603031"
	I1212 20:10:56.765678  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.766192  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.779498  365855 addons.go:70] Setting default-storageclass=true in profile "addons-603031"
	I1212 20:10:56.779534  365855 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-603031"
	I1212 20:10:56.780490  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.780680  365855 addons.go:70] Setting storage-provisioner=true in profile "addons-603031"
	I1212 20:10:56.780710  365855 addons.go:239] Setting addon storage-provisioner=true in "addons-603031"
	I1212 20:10:56.780769  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.782781  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.788192  365855 addons.go:70] Setting gcp-auth=true in profile "addons-603031"
	I1212 20:10:56.788277  365855 mustload.go:66] Loading cluster: addons-603031
	I1212 20:10:56.788985  365855 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:10:56.789494  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.794098  365855 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-603031"
	I1212 20:10:56.794185  365855 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-603031"
	I1212 20:10:56.794561  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.815570  365855 addons.go:70] Setting ingress=true in profile "addons-603031"
	I1212 20:10:56.815608  365855 addons.go:239] Setting addon ingress=true in "addons-603031"
	I1212 20:10:56.815663  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.816147  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.818758  365855 addons.go:70] Setting volcano=true in profile "addons-603031"
	I1212 20:10:56.818791  365855 addons.go:239] Setting addon volcano=true in "addons-603031"
	I1212 20:10:56.818875  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.819640  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.828424  365855 addons.go:70] Setting ingress-dns=true in profile "addons-603031"
	I1212 20:10:56.828458  365855 addons.go:239] Setting addon ingress-dns=true in "addons-603031"
	I1212 20:10:56.828504  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.828977  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.837153  365855 addons.go:70] Setting volumesnapshots=true in profile "addons-603031"
	I1212 20:10:56.837197  365855 addons.go:239] Setting addon volumesnapshots=true in "addons-603031"
	I1212 20:10:56.837235  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.837733  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.839815  365855 addons.go:70] Setting inspektor-gadget=true in profile "addons-603031"
	I1212 20:10:56.839880  365855 addons.go:239] Setting addon inspektor-gadget=true in "addons-603031"
	I1212 20:10:56.839922  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.849775  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.885057  365855 out.go:179] * Verifying Kubernetes components...
	I1212 20:10:56.902244  365855 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1212 20:10:56.905162  365855 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1212 20:10:56.905189  365855 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1212 20:10:56.905258  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:56.907361  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.922496  365855 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1212 20:10:56.922895  365855 addons.go:239] Setting addon default-storageclass=true in "addons-603031"
	I1212 20:10:56.922925  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.923439  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.928810  365855 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1212 20:10:56.928849  365855 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1212 20:10:56.928914  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:56.940574  365855 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:10:56.945143  365855 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1212 20:10:56.945210  365855 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1212 20:10:56.946572  365855 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1212 20:10:56.956333  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1212 20:10:56.958360  365855 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 20:10:56.958385  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1212 20:10:56.958456  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:56.946920  365855 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1212 20:10:56.948396  365855 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-603031"
	I1212 20:10:56.959241  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:10:56.959727  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:10:56.976685  365855 out.go:179]   - Using image docker.io/registry:3.0.0
	I1212 20:10:56.981626  365855 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1212 20:10:56.984557  365855 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 20:10:56.984582  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1212 20:10:56.984652  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:56.984869  365855 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1212 20:10:56.991047  365855 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 20:10:56.991120  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1212 20:10:56.991218  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.007940  365855 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1212 20:10:57.008019  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1212 20:10:57.008124  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.009488  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1212 20:10:57.016210  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1212 20:10:57.019523  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1212 20:10:57.020031  365855 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1212 20:10:57.020052  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1212 20:10:57.020116  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.028545  365855 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 20:10:57.030370  365855 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 20:10:57.030396  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1212 20:10:57.030472  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	W1212 20:10:57.067072  365855 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1212 20:10:57.067679  365855 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:10:57.090417  365855 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:10:57.090499  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:10:57.090594  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.092088  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1212 20:10:57.118407  365855 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1212 20:10:57.085204  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1212 20:10:57.122227  365855 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1212 20:10:57.122303  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1212 20:10:57.122405  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.124760  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1212 20:10:57.124839  365855 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1212 20:10:57.124952  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.147196  365855 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1212 20:10:57.148161  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.161133  365855 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 20:10:57.161907  365855 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:10:57.161923  365855 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:10:57.161987  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.164752  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.165939  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1212 20:10:57.166306  365855 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 20:10:57.166323  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1212 20:10:57.166385  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.195066  365855 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1212 20:10:57.195163  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1212 20:10:57.201446  365855 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1212 20:10:57.201643  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.202220  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.204533  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1212 20:10:57.204555  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1212 20:10:57.204621  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.210623  365855 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1212 20:10:57.211806  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.220503  365855 out.go:179]   - Using image docker.io/busybox:stable
	I1212 20:10:57.223554  365855 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 20:10:57.223581  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1212 20:10:57.223650  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:10:57.239669  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.244881  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.270378  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.324536  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.330297  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.331859  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	W1212 20:10:57.343478  365855 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1212 20:10:57.343525  365855 retry.go:31] will retry after 274.899991ms: ssh: handshake failed: EOF
	I1212 20:10:57.351868  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.370217  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.371341  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.374424  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:10:57.406130  365855 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:10:57.772153  365855 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1212 20:10:57.772223  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1212 20:10:57.957620  365855 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1212 20:10:57.957647  365855 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1212 20:10:57.960219  365855 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1212 20:10:57.960244  365855 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1212 20:10:57.990165  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1212 20:10:58.018628  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1212 20:10:58.050127  365855 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 20:10:58.050156  365855 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1212 20:10:58.075820  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1212 20:10:58.091009  365855 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1212 20:10:58.091035  365855 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1212 20:10:58.102794  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1212 20:10:58.106703  365855 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1212 20:10:58.106732  365855 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1212 20:10:58.107069  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1212 20:10:58.121015  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1212 20:10:58.123624  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:10:58.129266  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1212 20:10:58.129294  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1212 20:10:58.160939  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1212 20:10:58.172010  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1212 20:10:58.213490  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:10:58.228995  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1212 20:10:58.243790  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1212 20:10:58.243813  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1212 20:10:58.251003  365855 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1212 20:10:58.251029  365855 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1212 20:10:58.278837  365855 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1212 20:10:58.278863  365855 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1212 20:10:58.301221  365855 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1212 20:10:58.301246  365855 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1212 20:10:58.386264  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1212 20:10:58.386291  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1212 20:10:58.434904  365855 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1212 20:10:58.434965  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1212 20:10:58.453613  365855 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1212 20:10:58.453677  365855 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1212 20:10:58.475671  365855 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1212 20:10:58.475747  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1212 20:10:58.549894  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1212 20:10:58.549968  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1212 20:10:58.635236  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1212 20:10:58.638604  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1212 20:10:58.672756  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1212 20:10:58.672855  365855 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1212 20:10:58.674997  365855 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1212 20:10:58.675073  365855 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1212 20:10:58.898525  365855 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.492361752s)
	I1212 20:10:58.899363  365855 node_ready.go:35] waiting up to 6m0s for node "addons-603031" to be "Ready" ...
	I1212 20:10:58.899544  365855 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.704449089s)
	I1212 20:10:58.899595  365855 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1212 20:10:58.907025  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1212 20:10:58.907047  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1212 20:10:58.943262  365855 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 20:10:58.943334  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1212 20:10:59.314811  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1212 20:10:59.314887  365855 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1212 20:10:59.394694  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 20:10:59.405349  365855 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-603031" context rescaled to 1 replicas
	I1212 20:10:59.568870  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (1.578665923s)
	I1212 20:10:59.649011  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1212 20:10:59.649080  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1212 20:10:59.872795  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1212 20:10:59.872866  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1212 20:11:00.193043  365855 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1212 20:11:00.193143  365855 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1212 20:11:00.425351  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	W1212 20:11:00.913136  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:01.950292  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.931620877s)
	I1212 20:11:02.135635  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.032800939s)
	I1212 20:11:02.135734  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.028643215s)
	I1212 20:11:02.135818  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (4.014778214s)
	I1212 20:11:02.135890  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.012244401s)
	I1212 20:11:02.136250  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.060402332s)
	I1212 20:11:02.172894  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.011906173s)
	I1212 20:11:02.172930  365855 addons.go:495] Verifying addon metrics-server=true in "addons-603031"
	I1212 20:11:02.236320  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (4.064268754s)
	I1212 20:11:02.236393  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.022879137s)
	I1212 20:11:03.253252  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.024217684s)
	I1212 20:11:03.253286  365855 addons.go:495] Verifying addon ingress=true in "addons-603031"
	I1212 20:11:03.253493  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.618174383s)
	I1212 20:11:03.253508  365855 addons.go:495] Verifying addon registry=true in "addons-603031"
	I1212 20:11:03.253808  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.61512753s)
	I1212 20:11:03.254154  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.859389605s)
	W1212 20:11:03.254185  365855 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 20:11:03.254204  365855 retry.go:31] will retry after 278.257898ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1212 20:11:03.256670  365855 out.go:179] * Verifying ingress addon...
	I1212 20:11:03.258641  365855 out.go:179] * Verifying registry addon...
	I1212 20:11:03.258742  365855 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-603031 service yakd-dashboard -n yakd-dashboard
	
	I1212 20:11:03.261546  365855 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1212 20:11:03.261556  365855 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1212 20:11:03.275805  365855 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 20:11:03.275826  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:03.278156  365855 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1212 20:11:03.278176  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 20:11:03.403996  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:03.532811  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1212 20:11:03.564443  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.139041147s)
	I1212 20:11:03.564486  365855 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-603031"
	I1212 20:11:03.567474  365855 out.go:179] * Verifying csi-hostpath-driver addon...
	I1212 20:11:03.571159  365855 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1212 20:11:03.584063  365855 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 20:11:03.584085  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:03.766281  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:03.766683  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:04.075436  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:04.265378  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:04.266056  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:04.575542  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:04.637123  365855 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1212 20:11:04.637207  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:11:04.654197  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:11:04.766536  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:04.766879  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:04.778913  365855 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1212 20:11:04.792312  365855 addons.go:239] Setting addon gcp-auth=true in "addons-603031"
	I1212 20:11:04.792412  365855 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:11:04.792882  365855 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:11:04.810730  365855 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1212 20:11:04.810810  365855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:11:04.828440  365855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:11:05.078709  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:05.265150  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:05.265395  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:05.574496  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:05.764685  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:05.765091  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1212 20:11:05.905187  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:06.083032  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:06.267622  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:06.268131  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:06.276442  365855 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.743576352s)
	I1212 20:11:06.276516  365855 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.465704238s)
	I1212 20:11:06.279953  365855 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1212 20:11:06.282864  365855 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1212 20:11:06.285797  365855 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1212 20:11:06.285830  365855 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1212 20:11:06.299425  365855 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1212 20:11:06.299469  365855 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1212 20:11:06.315308  365855 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 20:11:06.315383  365855 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1212 20:11:06.329902  365855 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1212 20:11:06.575176  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:06.767858  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:06.769632  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:06.848046  365855 addons.go:495] Verifying addon gcp-auth=true in "addons-603031"
	I1212 20:11:06.853005  365855 out.go:179] * Verifying gcp-auth addon...
	I1212 20:11:06.855728  365855 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1212 20:11:06.866842  365855 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1212 20:11:06.866908  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:07.077741  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:07.265064  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:07.265362  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:07.359314  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:07.574475  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:07.765935  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:07.766304  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:07.859118  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:08.078903  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:08.264899  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:08.265607  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:08.359317  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:08.403290  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:08.574675  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:08.765444  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:08.765659  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:08.859897  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:09.075561  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:09.266525  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:09.266639  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:09.359868  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:09.574566  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:09.765056  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:09.765259  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:09.859326  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:10.076152  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:10.272191  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:10.272978  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:10.358817  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:10.575297  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:10.765668  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:10.765895  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:10.859045  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:10.903279  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:11.079219  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:11.265600  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:11.265781  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:11.358776  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:11.575887  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:11.765476  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:11.765681  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:11.858647  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:12.078611  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:12.264887  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:12.265517  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:12.359512  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:12.575311  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:12.765983  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:12.766268  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:12.859469  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:13.079962  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:13.266430  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:13.266621  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:13.360481  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:13.402479  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:13.574352  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:13.766420  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:13.766892  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:13.858989  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:14.079836  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:14.265215  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:14.265588  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:14.359291  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:14.574614  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:14.764785  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:14.765099  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:14.858948  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:15.078542  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:15.264943  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:15.265200  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:15.358906  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:15.403150  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:15.574221  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:15.765752  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:15.766260  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:15.859160  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:16.078853  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:16.265175  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:16.265314  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:16.359095  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:16.575204  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:16.765494  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:16.765813  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:16.858940  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:17.078389  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:17.265635  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:17.266038  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:17.358811  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:17.574756  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:17.765515  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:17.765703  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:17.859646  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:17.902233  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:18.077918  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:18.265886  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:18.266307  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:18.358908  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:18.575134  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:18.765352  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:18.765528  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:18.859205  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:19.077975  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:19.266029  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:19.266184  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:19.359074  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:19.574959  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:19.765821  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:19.765959  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:19.858782  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:19.902560  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:20.078461  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:20.265943  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:20.266123  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:20.359074  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:20.576963  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:20.764921  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:20.765883  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:20.858676  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:21.077999  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:21.265704  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:21.265847  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:21.358846  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:21.574933  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:21.765217  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:21.765377  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:21.859482  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:22.080138  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:22.265427  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:22.265656  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:22.359510  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:22.402476  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:22.574879  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:22.765481  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:22.765776  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:22.859464  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:23.078422  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:23.266534  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:23.266731  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:23.359491  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:23.574071  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:23.765668  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:23.765867  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:23.858871  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:24.078320  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:24.264959  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:24.265401  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:24.359394  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:24.402772  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:24.575165  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:24.765695  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:24.765820  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:24.862932  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:25.079437  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:25.268945  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:25.269223  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:25.358903  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:25.577351  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:25.764719  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:25.765248  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:25.859076  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:26.077580  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:26.265330  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:26.265768  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:26.359576  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:26.574961  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:26.765492  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:26.765632  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:26.859302  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:26.902948  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:27.077984  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:27.265497  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:27.265649  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:27.359602  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:27.574572  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:27.765005  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:27.765516  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:27.859395  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:28.078024  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:28.265346  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:28.265649  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:28.359191  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:28.573958  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:28.765213  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:28.765432  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:28.859531  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:29.077637  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:29.264558  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:29.264921  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:29.358521  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:29.402030  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:29.573884  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:29.765595  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:29.766199  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:29.859208  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:30.079327  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:30.265417  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:30.265733  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:30.360511  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:30.574054  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:30.765399  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:30.765758  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:30.859642  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:31.076104  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:31.265682  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:31.265897  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:31.358828  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:31.402986  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:31.575550  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:31.765488  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:31.765815  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:31.858566  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:32.077296  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:32.265616  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:32.265834  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:32.358830  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:32.574306  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:32.765407  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:32.765828  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:32.858348  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:33.077262  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:33.265570  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:33.265711  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:33.358955  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:33.574420  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:33.764729  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:33.764916  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:33.859191  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:33.902837  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:34.078469  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:34.264737  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:34.264931  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:34.358890  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:34.574587  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:34.764890  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:34.765131  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:34.858932  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:35.079104  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:35.266075  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:35.266146  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:35.359039  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:35.574836  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:35.764967  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:35.765602  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:35.859333  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:35.903007  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:36.078237  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:36.265815  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:36.265888  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:36.358778  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:36.574351  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:36.764536  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:36.764922  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:36.858513  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:37.077783  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:37.265079  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:37.265272  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:37.359058  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:37.574706  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:37.764848  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:37.764879  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:37.859506  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:38.078384  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:38.265652  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:38.265805  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:38.358703  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1212 20:11:38.402569  365855 node_ready.go:57] node "addons-603031" has "Ready":"False" status (will retry)
	I1212 20:11:38.575172  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:38.765400  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:38.765658  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:38.859558  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:39.077881  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:39.265090  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:39.265201  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:39.358890  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:39.574777  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:39.765390  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:39.765506  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:39.870872  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:39.949667  365855 node_ready.go:49] node "addons-603031" is "Ready"
	I1212 20:11:39.949698  365855 node_ready.go:38] duration metric: took 41.050250733s for node "addons-603031" to be "Ready" ...
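	The node_ready wait logged above corresponds, conceptually, to polling the node object until its Ready condition reports True. A minimal, hypothetical client-go sketch of that check follows; the kubeconfig path, timeout, and poll interval are assumptions for illustration, not minikube's node_ready.go.

	// Hypothetical sketch (not minikube's node_ready.go): poll a node's Ready
	// condition with client-go until it is True or the context times out.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil // node reports "Ready":"True"
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second): // retry, as the "(will retry)" lines above do
			}
		}
	}

	func main() {
		// Assumed: a kubeconfig at the default location points at the test cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "addons-603031"); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}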
	I1212 20:11:39.949714  365855 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:11:39.949771  365855 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:11:39.975522  365855 api_server.go:72] duration metric: took 43.23628628s to wait for apiserver process to appear ...
	I1212 20:11:39.975552  365855 api_server.go:88] waiting for apiserver healthz status ...
	I1212 20:11:39.975574  365855 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 20:11:40.034712  365855 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 20:11:40.038026  365855 api_server.go:141] control plane version: v1.34.2
	I1212 20:11:40.038061  365855 api_server.go:131] duration metric: took 62.500544ms to wait for apiserver health ...
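	The healthz step above reduces to polling the apiserver's /healthz URL until it answers 200 "ok". A stdlib-only sketch under that assumption; the endpoint, timeout, and TLS handling here are illustrative, not minikube's api_server.go.

	// Illustrative sketch: poll an apiserver healthz endpoint until HTTP 200,
	// skipping TLS verification as a bootstrap check against a self-signed
	// cluster certificate typically must.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // healthz returned 200: "ok"
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver healthz at %s not healthy within %s", url, timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.49.2:8443/healthz", 4*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver is healthy")
	}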
	I1212 20:11:40.038073  365855 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 20:11:40.052006  365855 system_pods.go:59] 19 kube-system pods found
	I1212 20:11:40.052045  365855 system_pods.go:61] "coredns-66bc5c9577-9rqzw" [057d8772-1bb0-492d-9aa7-6363449d3dff] Pending
	I1212 20:11:40.052052  365855 system_pods.go:61] "csi-hostpath-attacher-0" [37fb58df-d546-442b-a670-5bc0591cb463] Pending
	I1212 20:11:40.052057  365855 system_pods.go:61] "csi-hostpath-resizer-0" [58e72845-9acf-4248-98fe-d2148d2c6241] Pending
	I1212 20:11:40.052062  365855 system_pods.go:61] "csi-hostpathplugin-5b869" [facbae11-2217-40f3-8871-eb549588ac4c] Pending
	I1212 20:11:40.052066  365855 system_pods.go:61] "etcd-addons-603031" [655ab573-e8d3-4059-bc18-cbff1d8c1470] Running
	I1212 20:11:40.052070  365855 system_pods.go:61] "kindnet-2dtkn" [f011e7d0-45c3-4bda-bdb6-4714cb7ab310] Running
	I1212 20:11:40.052074  365855 system_pods.go:61] "kube-apiserver-addons-603031" [8d5df7d6-482a-4f34-8c46-7e17b01e4ea9] Running
	I1212 20:11:40.052078  365855 system_pods.go:61] "kube-controller-manager-addons-603031" [136e1ec3-e803-4b8f-b19d-50b6a6142cf4] Running
	I1212 20:11:40.052082  365855 system_pods.go:61] "kube-ingress-dns-minikube" [716456fd-092e-4201-b01a-dee91e8a3804] Pending
	I1212 20:11:40.052085  365855 system_pods.go:61] "kube-proxy-6c94h" [e813b5e8-481e-4d67-9ac6-44618fff8d3e] Running
	I1212 20:11:40.052089  365855 system_pods.go:61] "kube-scheduler-addons-603031" [a3da1652-6b66-47ea-8874-4a0bb4bc1e62] Running
	I1212 20:11:40.052094  365855 system_pods.go:61] "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Pending
	I1212 20:11:40.052104  365855 system_pods.go:61] "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Pending
	I1212 20:11:40.052108  365855 system_pods.go:61] "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Pending
	I1212 20:11:40.052119  365855 system_pods.go:61] "registry-creds-764b6fb674-7zll2" [3e7c826f-448c-4599-a385-861be612bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 20:11:40.052124  365855 system_pods.go:61] "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Pending
	I1212 20:11:40.052132  365855 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5jcb6" [6ef718fa-1c34-41d2-9b8f-a3fdb531333d] Pending
	I1212 20:11:40.052138  365855 system_pods.go:61] "snapshot-controller-7d9fbc56b8-bbnmg" [16ac7458-41a3-4701-8352-abf5e11cf295] Pending
	I1212 20:11:40.052141  365855 system_pods.go:61] "storage-provisioner" [89573572-bfc6-4422-b808-f1d27ef4ed3f] Pending
	I1212 20:11:40.052149  365855 system_pods.go:74] duration metric: took 14.070015ms to wait for pod list to return data ...
	I1212 20:11:40.052160  365855 default_sa.go:34] waiting for default service account to be created ...
	I1212 20:11:40.059155  365855 default_sa.go:45] found service account: "default"
	I1212 20:11:40.059233  365855 default_sa.go:55] duration metric: took 7.065802ms for default service account to be created ...
	I1212 20:11:40.059262  365855 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 20:11:40.133016  365855 system_pods.go:86] 19 kube-system pods found
	I1212 20:11:40.133099  365855 system_pods.go:89] "coredns-66bc5c9577-9rqzw" [057d8772-1bb0-492d-9aa7-6363449d3dff] Pending
	I1212 20:11:40.133122  365855 system_pods.go:89] "csi-hostpath-attacher-0" [37fb58df-d546-442b-a670-5bc0591cb463] Pending
	I1212 20:11:40.133143  365855 system_pods.go:89] "csi-hostpath-resizer-0" [58e72845-9acf-4248-98fe-d2148d2c6241] Pending
	I1212 20:11:40.133182  365855 system_pods.go:89] "csi-hostpathplugin-5b869" [facbae11-2217-40f3-8871-eb549588ac4c] Pending
	I1212 20:11:40.133208  365855 system_pods.go:89] "etcd-addons-603031" [655ab573-e8d3-4059-bc18-cbff1d8c1470] Running
	I1212 20:11:40.133229  365855 system_pods.go:89] "kindnet-2dtkn" [f011e7d0-45c3-4bda-bdb6-4714cb7ab310] Running
	I1212 20:11:40.133249  365855 system_pods.go:89] "kube-apiserver-addons-603031" [8d5df7d6-482a-4f34-8c46-7e17b01e4ea9] Running
	I1212 20:11:40.133271  365855 system_pods.go:89] "kube-controller-manager-addons-603031" [136e1ec3-e803-4b8f-b19d-50b6a6142cf4] Running
	I1212 20:11:40.133304  365855 system_pods.go:89] "kube-ingress-dns-minikube" [716456fd-092e-4201-b01a-dee91e8a3804] Pending
	I1212 20:11:40.133325  365855 system_pods.go:89] "kube-proxy-6c94h" [e813b5e8-481e-4d67-9ac6-44618fff8d3e] Running
	I1212 20:11:40.133344  365855 system_pods.go:89] "kube-scheduler-addons-603031" [a3da1652-6b66-47ea-8874-4a0bb4bc1e62] Running
	I1212 20:11:40.133366  365855 system_pods.go:89] "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Pending
	I1212 20:11:40.133401  365855 system_pods.go:89] "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Pending
	I1212 20:11:40.133421  365855 system_pods.go:89] "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Pending
	I1212 20:11:40.133443  365855 system_pods.go:89] "registry-creds-764b6fb674-7zll2" [3e7c826f-448c-4599-a385-861be612bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 20:11:40.133474  365855 system_pods.go:89] "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Pending
	I1212 20:11:40.133497  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5jcb6" [6ef718fa-1c34-41d2-9b8f-a3fdb531333d] Pending
	I1212 20:11:40.133516  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbnmg" [16ac7458-41a3-4701-8352-abf5e11cf295] Pending
	I1212 20:11:40.133536  365855 system_pods.go:89] "storage-provisioner" [89573572-bfc6-4422-b808-f1d27ef4ed3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:11:40.133581  365855 retry.go:31] will retry after 262.633772ms: missing components: kube-dns
	I1212 20:11:40.133848  365855 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1212 20:11:40.133888  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:40.305112  365855 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1212 20:11:40.305372  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:40.305351  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:40.363548  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:40.414743  365855 system_pods.go:86] 19 kube-system pods found
	I1212 20:11:40.414823  365855 system_pods.go:89] "coredns-66bc5c9577-9rqzw" [057d8772-1bb0-492d-9aa7-6363449d3dff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:11:40.414845  365855 system_pods.go:89] "csi-hostpath-attacher-0" [37fb58df-d546-442b-a670-5bc0591cb463] Pending
	I1212 20:11:40.414867  365855 system_pods.go:89] "csi-hostpath-resizer-0" [58e72845-9acf-4248-98fe-d2148d2c6241] Pending
	I1212 20:11:40.414901  365855 system_pods.go:89] "csi-hostpathplugin-5b869" [facbae11-2217-40f3-8871-eb549588ac4c] Pending
	I1212 20:11:40.414927  365855 system_pods.go:89] "etcd-addons-603031" [655ab573-e8d3-4059-bc18-cbff1d8c1470] Running
	I1212 20:11:40.414948  365855 system_pods.go:89] "kindnet-2dtkn" [f011e7d0-45c3-4bda-bdb6-4714cb7ab310] Running
	I1212 20:11:40.414968  365855 system_pods.go:89] "kube-apiserver-addons-603031" [8d5df7d6-482a-4f34-8c46-7e17b01e4ea9] Running
	I1212 20:11:40.414988  365855 system_pods.go:89] "kube-controller-manager-addons-603031" [136e1ec3-e803-4b8f-b19d-50b6a6142cf4] Running
	I1212 20:11:40.415023  365855 system_pods.go:89] "kube-ingress-dns-minikube" [716456fd-092e-4201-b01a-dee91e8a3804] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 20:11:40.415041  365855 system_pods.go:89] "kube-proxy-6c94h" [e813b5e8-481e-4d67-9ac6-44618fff8d3e] Running
	I1212 20:11:40.415059  365855 system_pods.go:89] "kube-scheduler-addons-603031" [a3da1652-6b66-47ea-8874-4a0bb4bc1e62] Running
	I1212 20:11:40.415081  365855 system_pods.go:89] "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 20:11:40.415114  365855 system_pods.go:89] "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Pending
	I1212 20:11:40.415131  365855 system_pods.go:89] "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Pending
	I1212 20:11:40.415151  365855 system_pods.go:89] "registry-creds-764b6fb674-7zll2" [3e7c826f-448c-4599-a385-861be612bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 20:11:40.415170  365855 system_pods.go:89] "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Pending
	I1212 20:11:40.415201  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5jcb6" [6ef718fa-1c34-41d2-9b8f-a3fdb531333d] Pending
	I1212 20:11:40.415228  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbnmg" [16ac7458-41a3-4701-8352-abf5e11cf295] Pending
	I1212 20:11:40.415249  365855 system_pods.go:89] "storage-provisioner" [89573572-bfc6-4422-b808-f1d27ef4ed3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:11:40.415281  365855 retry.go:31] will retry after 351.351313ms: missing components: kube-dns
	I1212 20:11:40.576008  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:40.832133  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:40.851789  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:40.867794  365855 system_pods.go:86] 19 kube-system pods found
	I1212 20:11:40.867839  365855 system_pods.go:89] "coredns-66bc5c9577-9rqzw" [057d8772-1bb0-492d-9aa7-6363449d3dff] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:11:40.867847  365855 system_pods.go:89] "csi-hostpath-attacher-0" [37fb58df-d546-442b-a670-5bc0591cb463] Pending
	I1212 20:11:40.867855  365855 system_pods.go:89] "csi-hostpath-resizer-0" [58e72845-9acf-4248-98fe-d2148d2c6241] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 20:11:40.867862  365855 system_pods.go:89] "csi-hostpathplugin-5b869" [facbae11-2217-40f3-8871-eb549588ac4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 20:11:40.867866  365855 system_pods.go:89] "etcd-addons-603031" [655ab573-e8d3-4059-bc18-cbff1d8c1470] Running
	I1212 20:11:40.867871  365855 system_pods.go:89] "kindnet-2dtkn" [f011e7d0-45c3-4bda-bdb6-4714cb7ab310] Running
	I1212 20:11:40.867876  365855 system_pods.go:89] "kube-apiserver-addons-603031" [8d5df7d6-482a-4f34-8c46-7e17b01e4ea9] Running
	I1212 20:11:40.867880  365855 system_pods.go:89] "kube-controller-manager-addons-603031" [136e1ec3-e803-4b8f-b19d-50b6a6142cf4] Running
	I1212 20:11:40.867891  365855 system_pods.go:89] "kube-ingress-dns-minikube" [716456fd-092e-4201-b01a-dee91e8a3804] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 20:11:40.867896  365855 system_pods.go:89] "kube-proxy-6c94h" [e813b5e8-481e-4d67-9ac6-44618fff8d3e] Running
	I1212 20:11:40.867916  365855 system_pods.go:89] "kube-scheduler-addons-603031" [a3da1652-6b66-47ea-8874-4a0bb4bc1e62] Running
	I1212 20:11:40.867922  365855 system_pods.go:89] "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 20:11:40.867933  365855 system_pods.go:89] "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Pending
	I1212 20:11:40.867940  365855 system_pods.go:89] "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 20:11:40.867946  365855 system_pods.go:89] "registry-creds-764b6fb674-7zll2" [3e7c826f-448c-4599-a385-861be612bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 20:11:40.867956  365855 system_pods.go:89] "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 20:11:40.867964  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5jcb6" [6ef718fa-1c34-41d2-9b8f-a3fdb531333d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 20:11:40.867971  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbnmg" [16ac7458-41a3-4701-8352-abf5e11cf295] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 20:11:40.867979  365855 system_pods.go:89] "storage-provisioner" [89573572-bfc6-4422-b808-f1d27ef4ed3f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 20:11:40.868001  365855 retry.go:31] will retry after 475.387205ms: missing components: kube-dns
	I1212 20:11:40.898935  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:41.097763  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:41.266783  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:41.266897  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:41.368912  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:41.369492  365855 system_pods.go:86] 19 kube-system pods found
	I1212 20:11:41.369519  365855 system_pods.go:89] "coredns-66bc5c9577-9rqzw" [057d8772-1bb0-492d-9aa7-6363449d3dff] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 20:11:41.369550  365855 system_pods.go:89] "csi-hostpath-attacher-0" [37fb58df-d546-442b-a670-5bc0591cb463] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1212 20:11:41.369568  365855 system_pods.go:89] "csi-hostpath-resizer-0" [58e72845-9acf-4248-98fe-d2148d2c6241] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1212 20:11:41.369575  365855 system_pods.go:89] "csi-hostpathplugin-5b869" [facbae11-2217-40f3-8871-eb549588ac4c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1212 20:11:41.369584  365855 system_pods.go:89] "etcd-addons-603031" [655ab573-e8d3-4059-bc18-cbff1d8c1470] Running
	I1212 20:11:41.369588  365855 system_pods.go:89] "kindnet-2dtkn" [f011e7d0-45c3-4bda-bdb6-4714cb7ab310] Running
	I1212 20:11:41.369593  365855 system_pods.go:89] "kube-apiserver-addons-603031" [8d5df7d6-482a-4f34-8c46-7e17b01e4ea9] Running
	I1212 20:11:41.369597  365855 system_pods.go:89] "kube-controller-manager-addons-603031" [136e1ec3-e803-4b8f-b19d-50b6a6142cf4] Running
	I1212 20:11:41.369609  365855 system_pods.go:89] "kube-ingress-dns-minikube" [716456fd-092e-4201-b01a-dee91e8a3804] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1212 20:11:41.369613  365855 system_pods.go:89] "kube-proxy-6c94h" [e813b5e8-481e-4d67-9ac6-44618fff8d3e] Running
	I1212 20:11:41.369624  365855 system_pods.go:89] "kube-scheduler-addons-603031" [a3da1652-6b66-47ea-8874-4a0bb4bc1e62] Running
	I1212 20:11:41.369633  365855 system_pods.go:89] "metrics-server-85b7d694d7-q8cmr" [b3fa48fc-5306-430b-af93-79d95d6670a5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1212 20:11:41.369646  365855 system_pods.go:89] "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1212 20:11:41.369667  365855 system_pods.go:89] "registry-6b586f9694-7qdmt" [09f30429-06eb-4593-bcd2-9c94c4d11c6b] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1212 20:11:41.369674  365855 system_pods.go:89] "registry-creds-764b6fb674-7zll2" [3e7c826f-448c-4599-a385-861be612bf36] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1212 20:11:41.369685  365855 system_pods.go:89] "registry-proxy-2ppkm" [e7947ffc-cabe-426c-addf-dad613ced47d] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1212 20:11:41.369698  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5jcb6" [6ef718fa-1c34-41d2-9b8f-a3fdb531333d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 20:11:41.369710  365855 system_pods.go:89] "snapshot-controller-7d9fbc56b8-bbnmg" [16ac7458-41a3-4701-8352-abf5e11cf295] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1212 20:11:41.369714  365855 system_pods.go:89] "storage-provisioner" [89573572-bfc6-4422-b808-f1d27ef4ed3f] Running
	I1212 20:11:41.369723  365855 system_pods.go:126] duration metric: took 1.310440423s to wait for k8s-apps to be running ...
	I1212 20:11:41.369735  365855 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 20:11:41.369800  365855 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:11:41.390264  365855 system_svc.go:56] duration metric: took 20.519227ms WaitForService to wait for kubelet
	I1212 20:11:41.390307  365855 kubeadm.go:587] duration metric: took 44.651086478s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:11:41.390327  365855 node_conditions.go:102] verifying NodePressure condition ...
	I1212 20:11:41.393227  365855 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 20:11:41.393268  365855 node_conditions.go:123] node cpu capacity is 2
	I1212 20:11:41.393282  365855 node_conditions.go:105] duration metric: took 2.950236ms to run NodePressure ...
	I1212 20:11:41.393296  365855 start.go:242] waiting for startup goroutines ...
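	The "will retry after ...: missing components: kube-dns" lines above amount to listing kube-system pods and backing off until every required component has a Running pod. A hedged client-go sketch of such a loop; the k8s-app label selector and the backoff values are assumptions, not minikube's retry.go.

	// Minimal sketch of a retry-with-backoff wait for required kube-system
	// components, assuming client-go access to the cluster.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// missingComponents returns the required component names that have no
	// Running pod carrying a matching k8s-app label (label name is assumed).
	func missingComponents(ctx context.Context, cs *kubernetes.Clientset, required []string) []string {
		var missing []string
		for _, app := range required {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=" + app,
			})
			running := false
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						running = true
					}
				}
			}
			if !running {
				missing = append(missing, app)
			}
		}
		return missing
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 3*time.Minute)
		defer cancel()

		backoff := 250 * time.Millisecond
		for {
			missing := missingComponents(ctx, cs, []string{"kube-dns"})
			if len(missing) == 0 {
				fmt.Println("all required components are running")
				return
			}
			fmt.Printf("will retry after %s: missing components: %v\n", backoff, missing)
			select {
			case <-ctx.Done():
				panic(ctx.Err())
			case <-time.After(backoff):
				backoff *= 2 // grow the interval, roughly like the intervals logged above
			}
		}
	}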
	I1212 20:11:41.575077  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:41.766632  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:41.766735  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:41.873715  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:42.081740  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:42.265941  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:42.267264  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:42.359802  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:42.575417  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:42.766577  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:42.767288  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:42.859480  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:43.083033  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:43.273425  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:43.273814  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:43.360476  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:43.575724  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:43.769435  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:43.769885  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:43.859599  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:44.089337  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:44.267831  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:44.274777  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:44.359765  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:44.576052  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:44.767803  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:44.768057  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:44.865938  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:45.081372  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:45.273656  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:45.278125  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:45.362070  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:45.577850  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:45.772076  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:45.772247  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:45.868884  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:46.079384  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:46.265710  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:46.266688  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:46.360051  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:46.576006  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:46.766818  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:46.767168  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:46.859415  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:47.081880  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:47.265294  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:47.265634  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:47.358798  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:47.574865  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:47.766625  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:47.767003  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:47.867414  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:48.080700  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:48.265406  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:48.267014  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:48.359872  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:48.575396  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:48.766507  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:48.766883  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:48.859046  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:49.082411  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:49.266900  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:49.267670  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:49.358481  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:49.574525  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:49.766376  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:49.766556  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:49.864421  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:50.082025  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:50.265288  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:50.265426  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:50.359229  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:50.574941  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:50.766295  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:50.766528  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:50.859472  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:51.079415  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:51.264854  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:51.265341  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:51.359436  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:51.575361  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:51.765873  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:51.766817  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:51.858668  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:52.080256  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:52.266250  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:52.266408  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:52.359401  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:52.575008  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:52.765508  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:52.765663  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:52.859394  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:53.075168  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:53.266048  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:53.266578  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:53.359177  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:53.574814  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:53.766137  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:53.766529  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:53.863552  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:54.084279  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:54.266087  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:54.266219  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:54.359229  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:54.575265  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:54.767128  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:54.767565  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:54.859632  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:55.079052  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:55.266754  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:55.267235  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:55.359406  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:55.575419  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:55.766127  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:55.766445  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:55.866667  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:56.081432  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:56.276461  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:56.277455  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:56.359807  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:56.575671  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:56.765425  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:56.765572  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:56.859408  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:57.078081  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:57.265547  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:57.265684  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:57.358997  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:57.575494  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:57.767204  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:57.767656  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:57.859046  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:58.081477  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:58.266659  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:58.272008  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:58.367922  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:58.577829  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:58.765909  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:58.766062  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:58.859015  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:59.095952  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:59.265499  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:59.266209  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:59.359476  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:11:59.574418  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:11:59.765998  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:11:59.766106  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:11:59.859389  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:00.214683  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:00.312468  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:00.312560  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:00.389482  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:00.575490  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:00.765692  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:00.766480  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:00.860221  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:01.081337  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:01.266569  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:01.266730  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:01.373186  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:01.577489  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:01.766094  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:01.767338  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:01.859699  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:02.081495  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:02.266346  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:02.266751  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:02.360992  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:02.585347  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:02.765606  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:02.766539  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:02.859598  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:03.082830  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:03.266957  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:03.268332  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:03.359084  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:03.575349  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:03.768301  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:03.768695  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:03.867688  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:04.082861  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:04.266230  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:04.267647  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:04.359323  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:04.574526  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:04.765995  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:04.766125  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:04.859494  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:05.081774  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:05.267254  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:05.267808  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:05.358750  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:05.575540  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:05.765840  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:05.766089  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:05.866088  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:06.094225  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:06.296428  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:06.296589  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:06.383588  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:06.575187  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:06.774246  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:06.774335  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:06.859092  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:07.090617  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:07.266952  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:07.267358  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:07.358593  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:07.575389  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:07.766751  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:07.766886  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:07.858741  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:08.079480  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:08.265203  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:08.265649  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:08.359418  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:08.574933  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:08.765775  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:08.766194  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:08.858922  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:09.086272  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:09.266265  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:09.266496  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:09.359388  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:09.575033  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:09.766637  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:09.768105  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:09.859734  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:10.079956  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:10.267137  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:10.267485  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:10.359462  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:10.575361  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:10.767089  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:10.767408  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:10.859420  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:11.081779  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:11.266666  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:11.267086  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:11.366489  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:11.576551  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:11.766226  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:11.766682  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1212 20:12:11.860067  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:12.080261  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:12.267362  365855 kapi.go:107] duration metric: took 1m9.005813365s to wait for kubernetes.io/minikube-addons=registry ...
	I1212 20:12:12.268014  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:12.367845  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:12.575973  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:12.766461  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:12.859997  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:13.079643  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:13.266288  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:13.359513  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:13.581057  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:13.767784  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:13.858930  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:14.083111  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:14.266599  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:14.360777  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:14.575674  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:14.764737  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:14.859437  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:15.075693  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:15.265439  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:15.359746  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:15.575942  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:15.765114  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:15.859104  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:16.082368  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:16.265566  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:16.360216  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:16.574857  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:16.765456  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:16.859926  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:17.080860  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:17.265918  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:17.358876  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:17.576026  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:17.765686  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:17.860187  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:18.078314  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:18.265710  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:18.360469  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:18.575554  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:18.764995  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:18.859065  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:19.080457  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:19.266161  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:19.358902  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:19.576272  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:19.766225  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:19.859304  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:20.081220  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:20.278844  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:20.372560  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:20.576064  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:20.767310  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:20.859540  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:21.090518  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:21.271209  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:21.361938  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:21.576248  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:21.768439  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:21.861402  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:22.084186  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:22.266279  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:22.363021  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:22.575802  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:22.765191  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:22.859237  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:23.083133  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:23.266240  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:23.367496  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:23.575259  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:23.765580  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:23.859760  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:24.078856  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:24.265484  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:24.359553  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:24.575906  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:24.765651  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:24.858666  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:25.080329  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:25.265900  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:25.359405  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:25.574499  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:25.765866  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:25.859001  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:26.089125  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:26.269211  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:26.359641  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:26.576417  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:26.765377  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:26.860141  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:27.080776  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:27.266829  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:27.359080  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:27.577899  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:27.765552  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:27.860915  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:28.083178  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:28.265367  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:28.359081  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:28.575081  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:28.765594  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:28.859245  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:29.078465  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:29.265727  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:29.360044  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:29.575674  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:29.765318  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:29.859282  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:30.110560  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:30.265345  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:30.359169  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:30.575016  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:30.765476  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:30.859614  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:31.082672  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:31.265445  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:31.359721  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:31.576416  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:31.766061  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:31.859714  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:32.082695  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:32.264945  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:32.359058  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:32.576595  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:32.765629  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:32.870266  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:33.080659  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:33.267991  365855 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1212 20:12:33.367049  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:33.575662  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:33.764992  365855 kapi.go:107] duration metric: took 1m30.50343335s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1212 20:12:33.859035  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:34.078936  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:34.359436  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:34.589891  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:34.859890  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:35.078973  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:35.359389  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:35.575173  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:35.859668  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1212 20:12:36.076525  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:36.360788  365855 kapi.go:107] duration metric: took 1m29.50505875s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1212 20:12:36.363764  365855 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-603031 cluster.
	I1212 20:12:36.366615  365855 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1212 20:12:36.369575  365855 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
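[editor's note] The three gcp-auth messages above describe the addon's opt-out mechanism: a pod whose configuration carries a label with the `gcp-auth-skip-secret` key is skipped when credentials are injected. As a minimal illustration (not part of the captured log), the following Go sketch uses client-go to create such a pod; the pod name, namespace, and image are assumptions chosen only for the example.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig that "minikube start" wrote (~/.kube/config by default).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "busybox-no-gcp-creds", // hypothetical pod name for this example
			Labels: map[string]string{
				// Label key taken from the gcp-auth message above; its presence
				// tells the addon to skip mounting the credentials into this pod.
				"gcp-auth-skip-secret": "true",
			},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
				Command: []string{"sleep", "3600"},
			}},
		},
	}

	created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created pod %s without GCP credential mounts\n", created.Name)
}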
	I1212 20:12:36.575952  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:37.080850  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:37.575026  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:38.075706  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:38.575065  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:39.083758  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:39.574629  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:40.076573  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:40.574797  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:41.078597  365855 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1212 20:12:41.575914  365855 kapi.go:107] duration metric: took 1m38.004754392s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1212 20:12:41.579090  365855 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, cloud-spanner, nvidia-device-plugin, registry-creds, storage-provisioner, metrics-server, storage-provisioner-rancher, inspektor-gadget, default-storageclass, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1212 20:12:41.581832  365855 addons.go:530] duration metric: took 1m44.842391025s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns cloud-spanner nvidia-device-plugin registry-creds storage-provisioner metrics-server storage-provisioner-rancher inspektor-gadget default-storageclass yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1212 20:12:41.581908  365855 start.go:247] waiting for cluster config update ...
	I1212 20:12:41.581930  365855 start.go:256] writing updated cluster config ...
	I1212 20:12:41.582277  365855 ssh_runner.go:195] Run: rm -f paused
	I1212 20:12:41.587501  365855 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:41.591444  365855 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9rqzw" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.598300  365855 pod_ready.go:94] pod "coredns-66bc5c9577-9rqzw" is "Ready"
	I1212 20:12:41.598376  365855 pod_ready.go:86] duration metric: took 6.85569ms for pod "coredns-66bc5c9577-9rqzw" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.601478  365855 pod_ready.go:83] waiting for pod "etcd-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.607063  365855 pod_ready.go:94] pod "etcd-addons-603031" is "Ready"
	I1212 20:12:41.607131  365855 pod_ready.go:86] duration metric: took 5.585654ms for pod "etcd-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.609807  365855 pod_ready.go:83] waiting for pod "kube-apiserver-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.615323  365855 pod_ready.go:94] pod "kube-apiserver-addons-603031" is "Ready"
	I1212 20:12:41.615398  365855 pod_ready.go:86] duration metric: took 5.516131ms for pod "kube-apiserver-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.618608  365855 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:41.991965  365855 pod_ready.go:94] pod "kube-controller-manager-addons-603031" is "Ready"
	I1212 20:12:41.991998  365855 pod_ready.go:86] duration metric: took 373.32438ms for pod "kube-controller-manager-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:42.193320  365855 pod_ready.go:83] waiting for pod "kube-proxy-6c94h" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:42.592069  365855 pod_ready.go:94] pod "kube-proxy-6c94h" is "Ready"
	I1212 20:12:42.592098  365855 pod_ready.go:86] duration metric: took 398.743564ms for pod "kube-proxy-6c94h" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:42.792840  365855 pod_ready.go:83] waiting for pod "kube-scheduler-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:43.192157  365855 pod_ready.go:94] pod "kube-scheduler-addons-603031" is "Ready"
	I1212 20:12:43.192226  365855 pod_ready.go:86] duration metric: took 399.359275ms for pod "kube-scheduler-addons-603031" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 20:12:43.192248  365855 pod_ready.go:40] duration metric: took 1.604671963s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 20:12:43.246563  365855 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 20:12:43.250274  365855 out.go:179] * Done! kubectl is now configured to use "addons-603031" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 20:12:44 addons-603031 crio[829]: time="2025-12-12T20:12:44.29109183Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1c22e8b87cea59e447932671e282fc6c7c20487b5dd28dfdcb0ff1454dad53c4 UID:1ce0a663-cd7a-4247-ba0d-fcaaf2f5818e NetNS:/var/run/netns/82d508db-8a7d-427a-9d6c-e1b9da72b5ba Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0x400012d380}] Aliases:map[]}"
	Dec 12 20:12:44 addons-603031 crio[829]: time="2025-12-12T20:12:44.291247738Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 12 20:12:44 addons-603031 crio[829]: time="2025-12-12T20:12:44.294199745Z" level=info msg="Ran pod sandbox 1c22e8b87cea59e447932671e282fc6c7c20487b5dd28dfdcb0ff1454dad53c4 with infra container: default/busybox/POD" id=8e2f0048-0ff6-4e76-a608-cafcc692ab6d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 12 20:12:44 addons-603031 crio[829]: time="2025-12-12T20:12:44.297100963Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ae0343c8-25c4-4e99-bf26-1a9b5d89a91b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:44 addons-603031 crio[829]: time="2025-12-12T20:12:44.297414813Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=ae0343c8-25c4-4e99-bf26-1a9b5d89a91b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:44 addons-603031 crio[829]: time="2025-12-12T20:12:44.297470091Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=ae0343c8-25c4-4e99-bf26-1a9b5d89a91b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:44 addons-603031 crio[829]: time="2025-12-12T20:12:44.298521631Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=be6e9c05-7762-4f71-96b5-e924d27273f8 name=/runtime.v1.ImageService/PullImage
	Dec 12 20:12:44 addons-603031 crio[829]: time="2025-12-12T20:12:44.300155306Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 12 20:12:46 addons-603031 crio[829]: time="2025-12-12T20:12:46.491621261Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e" id=be6e9c05-7762-4f71-96b5-e924d27273f8 name=/runtime.v1.ImageService/PullImage
	Dec 12 20:12:46 addons-603031 crio[829]: time="2025-12-12T20:12:46.492325145Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e5b0b7cd-cc9c-4f1d-a900-faada305630f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:46 addons-603031 crio[829]: time="2025-12-12T20:12:46.494772409Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=67d3316f-6fc7-4a1e-bff2-b97ba22dab8b name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:12:46 addons-603031 crio[829]: time="2025-12-12T20:12:46.506445588Z" level=info msg="Creating container: default/busybox/busybox" id=ec3dd825-c653-4f80-b599-8f4c00db8baf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:46 addons-603031 crio[829]: time="2025-12-12T20:12:46.506578898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:46 addons-603031 crio[829]: time="2025-12-12T20:12:46.515345911Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:46 addons-603031 crio[829]: time="2025-12-12T20:12:46.51586517Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 20:12:46 addons-603031 crio[829]: time="2025-12-12T20:12:46.541711756Z" level=info msg="Created container 18f984869b4fbe2fd30fdeb711f4013448d957b97799b6f67e400a68ed15135f: default/busybox/busybox" id=ec3dd825-c653-4f80-b599-8f4c00db8baf name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 20:12:46 addons-603031 crio[829]: time="2025-12-12T20:12:46.542584947Z" level=info msg="Starting container: 18f984869b4fbe2fd30fdeb711f4013448d957b97799b6f67e400a68ed15135f" id=927261e8-476a-4b6d-8aa2-69152818d5ba name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 20:12:46 addons-603031 crio[829]: time="2025-12-12T20:12:46.546034101Z" level=info msg="Started container" PID=4923 containerID=18f984869b4fbe2fd30fdeb711f4013448d957b97799b6f67e400a68ed15135f description=default/busybox/busybox id=927261e8-476a-4b6d-8aa2-69152818d5ba name=/runtime.v1.RuntimeService/StartContainer sandboxID=1c22e8b87cea59e447932671e282fc6c7c20487b5dd28dfdcb0ff1454dad53c4
	Dec 12 20:12:52 addons-603031 crio[829]: time="2025-12-12T20:12:52.450106398Z" level=info msg="Removing container: e4b84f4534c09fe58f8fab12a55e9c88bb9a625833d4c0fd1cb3a26d64ed568e" id=4841f267-b011-4eda-bc12-b9078821a708 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:52 addons-603031 crio[829]: time="2025-12-12T20:12:52.452432938Z" level=info msg="Error loading conmon cgroup of container e4b84f4534c09fe58f8fab12a55e9c88bb9a625833d4c0fd1cb3a26d64ed568e: cgroup deleted" id=4841f267-b011-4eda-bc12-b9078821a708 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:52 addons-603031 crio[829]: time="2025-12-12T20:12:52.465713166Z" level=info msg="Removed container e4b84f4534c09fe58f8fab12a55e9c88bb9a625833d4c0fd1cb3a26d64ed568e: gcp-auth/gcp-auth-certs-create-wkcbf/create" id=4841f267-b011-4eda-bc12-b9078821a708 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 20:12:52 addons-603031 crio[829]: time="2025-12-12T20:12:52.468618257Z" level=info msg="Stopping pod sandbox: ef9ee8e38e4e19bdadd01ec43acec6c75770014085b623cde7a41a78c96afc06" id=c11a7432-2883-448c-a372-ab69b76bb540 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 20:12:52 addons-603031 crio[829]: time="2025-12-12T20:12:52.468833799Z" level=info msg="Stopped pod sandbox (already stopped): ef9ee8e38e4e19bdadd01ec43acec6c75770014085b623cde7a41a78c96afc06" id=c11a7432-2883-448c-a372-ab69b76bb540 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 12 20:12:52 addons-603031 crio[829]: time="2025-12-12T20:12:52.469324898Z" level=info msg="Removing pod sandbox: ef9ee8e38e4e19bdadd01ec43acec6c75770014085b623cde7a41a78c96afc06" id=d6354cc7-4dcd-4917-b966-5dc49ed5c7ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 12 20:12:52 addons-603031 crio[829]: time="2025-12-12T20:12:52.47492272Z" level=info msg="Removed pod sandbox: ef9ee8e38e4e19bdadd01ec43acec6c75770014085b623cde7a41a78c96afc06" id=d6354cc7-4dcd-4917-b966-5dc49ed5c7ca name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	18f984869b4fb       gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e                                          8 seconds ago        Running             busybox                                  0                   1c22e8b87cea5       busybox                                     default
	2fd403a0a3c1f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:bd6b8417b2a83e66ab1d4c1193bb2774f027745bdebbd9e0c1a6518afdecc39a                          14 seconds ago       Running             csi-snapshotter                          0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	4ad63355a4185       registry.k8s.io/sig-storage/csi-provisioner@sha256:98ffd09c0784203d200e0f8c241501de31c8df79644caac7eed61bd6391e5d49                          15 seconds ago       Running             csi-provisioner                          0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	38a91c939e267       registry.k8s.io/sig-storage/livenessprobe@sha256:8b00c6e8f52639ed9c6f866085893ab688e57879741b3089e3cfa9998502e158                            16 seconds ago       Running             liveness-probe                           0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	de7b51c83e158       registry.k8s.io/sig-storage/hostpathplugin@sha256:7b1dfc90a367222067fc468442fdf952e20fc5961f25c1ad654300ddc34d7083                           17 seconds ago       Running             hostpath                                 0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	6b2e11d8ab454       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:2de98fa4b397f92e5e8e05d73caf21787a1c72c41378f3eb7bad72b1e0f4e9ff                                 19 seconds ago       Running             gcp-auth                                 0                   1fd2aa9552077       gcp-auth-78565c9fb4-fm95l                   gcp-auth
	b4b133c92eb73       registry.k8s.io/ingress-nginx/controller@sha256:75494e2145fbebf362d24e24e9285b7fbb7da8783ab272092e3126e24ee4776d                             22 seconds ago       Running             controller                               0                   aca915a799585       ingress-nginx-controller-85d4c799dd-xvdhs   ingress-nginx
	682df2b59c950       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:fadc7bf59b69965b6707edb68022bed4f55a1f99b15f7acd272793e48f171496                            29 seconds ago       Running             gadget                                   0                   aedd69022f38e       gadget-ldgrj                                gadget
	4809fb232f668       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:511b8c8ac828194a753909d26555ff08bc12f497dd8daeb83fe9d593693a26c1                33 seconds ago       Running             node-driver-registrar                    0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	a82f1ede56743       docker.io/kicbase/minikube-ingress-dns@sha256:6d710af680d8a9b5a5b1f9047eb83ee4c9258efd3fcd962f938c00bcbb4c5958                               34 seconds ago       Running             minikube-ingress-dns                     0                   eb68379552547       kube-ingress-dns-minikube                   kube-system
	9c3118d5851c9       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             35 seconds ago       Exited              patch                                    2                   3fea644d7881e       ingress-nginx-admission-patch-9v2hg         ingress-nginx
	0d7b544d4350e       e8105550077f5c6c8e92536651451107053f0e41635396ee42aef596441c179a                                                                             35 seconds ago       Exited              patch                                    2                   14fc32f3169db       gcp-auth-certs-patch-kbzrd                  gcp-auth
	f53fc93dd83c0       gcr.io/k8s-minikube/kube-registry-proxy@sha256:26c84a64530a67aa4d749dd4356d67ea27a2576e4d25b640d21857b0574cfd4b                              43 seconds ago       Running             registry-proxy                           0                   0b6cb28584b7b       registry-proxy-2ppkm                        kube-system
	e415e482778c5       registry.k8s.io/sig-storage/csi-attacher@sha256:4b5609c78455de45821910065281a368d5f760b41250f90cbde5110543bdc326                             47 seconds ago       Running             csi-attacher                             0                   4884db98bbc3d       csi-hostpath-attacher-0                     kube-system
	d5c2cf4090c13       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      49 seconds ago       Running             volume-snapshot-controller               0                   4423bef11c1a1       snapshot-controller-7d9fbc56b8-bbnmg        kube-system
	74073eb172ae6       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:c9c1ef89e4bb9d6c9c6c0b5375c3253a0b951e5b731240be20cebe5593de142d                   49 seconds ago       Exited              create                                   0                   bd40fbc5d9bdf       ingress-nginx-admission-create-szxsl        ingress-nginx
	9fac64cdd9389       docker.io/library/registry@sha256:8715992817b2254fe61e74ffc6a4096d57a0cde36c95ea075676c05f7a94a630                                           49 seconds ago       Running             registry                                 0                   bbae55c095373       registry-6b586f9694-7qdmt                   kube-system
	bc9cffb778ec6       gcr.io/cloud-spanner-emulator/emulator@sha256:daeab9cb1978e02113045625e2633619f465f22aac7638101995f4cd03607170                               51 seconds ago       Running             cloud-spanner-emulator                   0                   224c78c64804d       cloud-spanner-emulator-5bdddb765-cfmxl      default
	bfb13326e68f2       nvcr.io/nvidia/k8s-device-plugin@sha256:80924fc52384565a7c59f1e2f12319fb8f2b02a1c974bb3d73a9853fe01af874                                     58 seconds ago       Running             nvidia-device-plugin-ctr                 0                   0d3c5e66e6897       nvidia-device-plugin-daemonset-sthfk        kube-system
	421200960de75       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:8b9df00898ded1bfb4d8f3672679f29cd9f88e651b76fef64121c8d347dd12c0   About a minute ago   Running             csi-external-health-monitor-controller   0                   cf48359c46623       csi-hostpathplugin-5b869                    kube-system
	f3c43f32965a1       registry.k8s.io/sig-storage/snapshot-controller@sha256:5d668e35c15df6e87e2530da25d557f543182cedbdb39d421b87076463ee9857                      About a minute ago   Running             volume-snapshot-controller               0                   4f93b1a079d55       snapshot-controller-7d9fbc56b8-5jcb6        kube-system
	ce0ade5e7b384       registry.k8s.io/sig-storage/csi-resizer@sha256:82c1945463342884c05a5b2bc31319712ce75b154c279c2a10765f61e0f688af                              About a minute ago   Running             csi-resizer                              0                   c7d04fbcab4b4       csi-hostpath-resizer-0                      kube-system
	a11300ef5861d       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98                             About a minute ago   Running             local-path-provisioner                   0                   7fa05a10ffe57       local-path-provisioner-648f6765c9-c98md     local-path-storage
	1244a5a603a61       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                                              About a minute ago   Running             yakd                                     0                   1f7d8bd186c41       yakd-dashboard-5ff678cb9-v447b              yakd-dashboard
	bdd23d655fa55       registry.k8s.io/metrics-server/metrics-server@sha256:8f49cf1b0688bb0eae18437882dbf6de2c7a2baac71b1492bc4eca25439a1bf2                        About a minute ago   Running             metrics-server                           0                   a7b036bac3c0e       metrics-server-85b7d694d7-q8cmr             kube-system
	dc26db242e241       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc                                                                             About a minute ago   Running             coredns                                  0                   006bc26307510       coredns-66bc5c9577-9rqzw                    kube-system
	e1266d6c75a1e       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                                             About a minute ago   Running             storage-provisioner                      0                   92df0428f10b7       storage-provisioner                         kube-system
	f05b6cd78460f       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c                                                                             About a minute ago   Running             kindnet-cni                              0                   73e7a3b46adfc       kindnet-2dtkn                               kube-system
	d5b835e400afb       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786                                                                             About a minute ago   Running             kube-proxy                               0                   f7a64e29d4a80       kube-proxy-6c94h                            kube-system
	6b921948e7a2b       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949                                                                             2 minutes ago        Running             kube-scheduler                           0                   255b97b30c218       kube-scheduler-addons-603031                kube-system
	389edf543c495       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2                                                                             2 minutes ago        Running             kube-controller-manager                  0                   9078f5134f4dd       kube-controller-manager-addons-603031       kube-system
	e4de15886f671       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42                                                                             2 minutes ago        Running             etcd                                     0                   7219ed53321da       etcd-addons-603031                          kube-system
	53fcf67696a94       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7                                                                             2 minutes ago        Running             kube-apiserver                           0                   f469cb4a159bb       kube-apiserver-addons-603031                kube-system
	
	
	==> coredns [dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd] <==
	[INFO] 10.244.0.18:41736 - 57968 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000082741s
	[INFO] 10.244.0.18:41736 - 58589 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002039128s
	[INFO] 10.244.0.18:41736 - 54604 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001713315s
	[INFO] 10.244.0.18:41736 - 28307 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000126131s
	[INFO] 10.244.0.18:41736 - 27888 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000183042s
	[INFO] 10.244.0.18:60608 - 43013 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000162858s
	[INFO] 10.244.0.18:60608 - 43507 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000088198s
	[INFO] 10.244.0.18:39190 - 2110 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119181s
	[INFO] 10.244.0.18:39190 - 1905 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008535s
	[INFO] 10.244.0.18:38121 - 6508 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000092678s
	[INFO] 10.244.0.18:38121 - 6328 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000143805s
	[INFO] 10.244.0.18:34529 - 11294 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001166232s
	[INFO] 10.244.0.18:34529 - 11738 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001082088s
	[INFO] 10.244.0.18:57201 - 37741 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000106454s
	[INFO] 10.244.0.18:57201 - 37610 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000085884s
	[INFO] 10.244.0.21:41906 - 18684 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000228302s
	[INFO] 10.244.0.21:37023 - 10435 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000100637s
	[INFO] 10.244.0.21:48668 - 58272 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000280389s
	[INFO] 10.244.0.21:55727 - 3722 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00016467s
	[INFO] 10.244.0.21:33592 - 58357 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000180384s
	[INFO] 10.244.0.21:33038 - 21713 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000099611s
	[INFO] 10.244.0.21:45901 - 10876 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00256363s
	[INFO] 10.244.0.21:57361 - 61820 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002532237s
	[INFO] 10.244.0.21:38930 - 7311 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002201074s
	[INFO] 10.244.0.21:56790 - 39448 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.002486305s
	
	
	==> describe nodes <==
	Name:               addons-603031
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-603031
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=addons-603031
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_10_53_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-603031
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-603031"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:10:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-603031
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 20:12:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 20:12:55 +0000   Fri, 12 Dec 2025 20:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 20:12:55 +0000   Fri, 12 Dec 2025 20:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 20:12:55 +0000   Fri, 12 Dec 2025 20:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 20:12:55 +0000   Fri, 12 Dec 2025 20:11:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-603031
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                d0303866-b2d5-479a-a0a7-1e376c628274
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (26 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-cfmxl       0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  gadget                      gadget-ldgrj                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  gcp-auth                    gcp-auth-78565c9fb4-fm95l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-xvdhs    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         112s
	  kube-system                 coredns-66bc5c9577-9rqzw                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     118s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 csi-hostpathplugin-5b869                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 etcd-addons-603031                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m3s
	  kube-system                 kindnet-2dtkn                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      118s
	  kube-system                 kube-apiserver-addons-603031                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-addons-603031        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-6c94h                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-addons-603031                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 metrics-server-85b7d694d7-q8cmr              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         114s
	  kube-system                 nvidia-device-plugin-daemonset-sthfk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 registry-6b586f9694-7qdmt                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 registry-creds-764b6fb674-7zll2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 registry-proxy-2ppkm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 snapshot-controller-7d9fbc56b8-5jcb6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 snapshot-controller-7d9fbc56b8-bbnmg         0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  local-path-storage          local-path-provisioner-648f6765c9-c98md      0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-v447b               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 115s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node addons-603031 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node addons-603031 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node addons-603031 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m3s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m3s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m3s                   kubelet          Node addons-603031 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m3s                   kubelet          Node addons-603031 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s                   kubelet          Node addons-603031 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           119s                   node-controller  Node addons-603031 event: Registered Node addons-603031 in Controller
	  Normal   NodeReady                76s                    kubelet          Node addons-603031 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6] <==
	{"level":"warn","ts":"2025-12-12T20:10:48.222876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.248837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.297611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.304938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.319028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.341018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.360815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.378412Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.394557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.438087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.440355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.465084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.479204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.516016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.535545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.561777Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.577536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.598577Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:10:48.700814Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:03.857092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:03.868791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:26.439220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39626","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:26.455026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:26.503394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T20:11:26.518728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39686","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [6b2e11d8ab454cbe479a492a15f224b1b0c5b514bf7e21f46cc09b191184004a] <==
	2025/12/12 20:12:35 GCP Auth Webhook started!
	2025/12/12 20:12:43 Ready to marshal response ...
	2025/12/12 20:12:43 Ready to write response ...
	2025/12/12 20:12:43 Ready to marshal response ...
	2025/12/12 20:12:43 Ready to write response ...
	2025/12/12 20:12:44 Ready to marshal response ...
	2025/12/12 20:12:44 Ready to write response ...
	
	
	==> kernel <==
	 20:12:55 up  2:55,  0 user,  load average: 3.77, 2.70, 2.07
	Linux addons-603031 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896] <==
	I1212 20:10:59.530821       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 20:10:59.531619       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1212 20:11:29.531822       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1212 20:11:29.531826       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1212 20:11:29.531923       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1212 20:11:29.532000       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1212 20:11:31.031360       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 20:11:31.031467       1 metrics.go:72] Registering metrics
	I1212 20:11:31.031597       1 controller.go:711] "Syncing nftables rules"
	I1212 20:11:39.534674       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:11:39.534730       1 main.go:301] handling current node
	I1212 20:11:49.531280       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:11:49.531358       1 main.go:301] handling current node
	I1212 20:11:59.530820       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:11:59.530931       1 main.go:301] handling current node
	I1212 20:12:09.531366       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:12:09.531401       1 main.go:301] handling current node
	I1212 20:12:19.532459       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:12:19.532488       1 main.go:301] handling current node
	I1212 20:12:29.532432       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:12:29.532495       1 main.go:301] handling current node
	I1212 20:12:39.531312       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:12:39.531342       1 main.go:301] handling current node
	I1212 20:12:49.530681       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 20:12:49.530743       1 main.go:301] handling current node
	
	
	==> kube-apiserver [53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6] <==
	I1212 20:11:03.430142       1 controller.go:667] quota admission added evaluator for: statefulsets.apps
	I1212 20:11:03.517033       1 alloc.go:328] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.111.237.216"}
	W1212 20:11:03.845812       1 logging.go:55] [core] [Channel #259 SubChannel #260]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1212 20:11:03.867017       1 logging.go:55] [core] [Channel #263 SubChannel #264]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1212 20:11:06.715732       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.99.221.218"}
	W1212 20:11:26.439220       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1212 20:11:26.455013       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 20:11:26.503364       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 20:11:26.518729       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 20:11:39.854638       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.221.218:443: connect: connection refused
	E1212 20:11:39.854755       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.221.218:443: connect: connection refused" logger="UnhandledError"
	W1212 20:11:39.855264       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.221.218:443: connect: connection refused
	E1212 20:11:39.855380       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.221.218:443: connect: connection refused" logger="UnhandledError"
	W1212 20:11:39.946369       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.99.221.218:443: connect: connection refused
	E1212 20:11:39.946415       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.99.221.218:443: connect: connection refused" logger="UnhandledError"
	W1212 20:11:45.748762       1 handler_proxy.go:99] no RequestInfo found in the context
	E1212 20:11:45.748899       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1212 20:11:45.750834       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.106.141:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.106.141:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.106.141:443: connect: connection refused" logger="UnhandledError"
	E1212 20:11:45.752007       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.106.141:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.106.141:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.109.106.141:443: connect: connection refused" logger="UnhandledError"
	I1212 20:11:45.865650       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1212 20:12:53.461609       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54952: use of closed network connection
	E1212 20:12:53.612253       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:54970: use of closed network connection
	
	
	==> kube-controller-manager [389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c] <==
	I1212 20:10:56.452720       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 20:10:56.452744       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 20:10:56.452749       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 20:10:56.452754       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 20:10:56.461828       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-603031" podCIDRs=["10.244.0.0/24"]
	I1212 20:10:56.461837       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 20:10:56.467616       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 20:10:56.469083       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 20:10:56.469053       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1212 20:10:56.469294       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 20:10:56.469368       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1212 20:10:56.469921       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 20:10:56.470737       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 20:10:56.471129       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 20:10:56.471668       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1212 20:10:56.475582       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1212 20:11:01.726060       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1212 20:11:26.429942       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1212 20:11:26.430108       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1212 20:11:26.430156       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1212 20:11:26.483835       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1212 20:11:26.493294       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1212 20:11:26.530788       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 20:11:26.595275       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 20:11:41.422821       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165] <==
	I1212 20:10:59.362862       1 server_linux.go:53] "Using iptables proxy"
	I1212 20:10:59.485423       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 20:10:59.588529       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 20:10:59.592137       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1212 20:10:59.592236       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 20:10:59.661925       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 20:10:59.661982       1 server_linux.go:132] "Using iptables Proxier"
	I1212 20:10:59.669465       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 20:10:59.670109       1 server.go:527] "Version info" version="v1.34.2"
	I1212 20:10:59.670128       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:10:59.677706       1 config.go:200] "Starting service config controller"
	I1212 20:10:59.677723       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 20:10:59.677871       1 config.go:106] "Starting endpoint slice config controller"
	I1212 20:10:59.677875       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 20:10:59.678052       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 20:10:59.678057       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 20:10:59.685652       1 config.go:309] "Starting node config controller"
	I1212 20:10:59.686140       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 20:10:59.686152       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 20:10:59.777928       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 20:10:59.778017       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1212 20:10:59.778250       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3] <==
	I1212 20:10:50.823442       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 20:10:50.826062       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 20:10:50.826755       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 20:10:50.826781       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 20:10:50.835917       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1212 20:10:50.839834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 20:10:50.840294       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 20:10:50.840461       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 20:10:50.840631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 20:10:50.840700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 20:10:50.840786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 20:10:50.840857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 20:10:50.840896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 20:10:50.840941       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 20:10:50.841054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 20:10:50.841130       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 20:10:50.841177       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 20:10:50.841254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 20:10:50.841290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 20:10:50.841368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 20:10:50.841416       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 20:10:50.841551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 20:10:50.841602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 20:10:50.841669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1212 20:10:52.336594       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 20:12:21 addons-603031 kubelet[1291]: I1212 20:12:21.788127    1291 scope.go:117] "RemoveContainer" containerID="b62b68c381a81dda13f02b97270922d8df97558c25b1db432ee5c1babf60cf2e"
	Dec 12 20:12:22 addons-603031 kubelet[1291]: I1212 20:12:22.367907    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs9m5\" (UniqueName: \"kubernetes.io/projected/9a48027b-7d8a-44ad-a2df-d5e4567a6acb-kube-api-access-cs9m5\") pod \"9a48027b-7d8a-44ad-a2df-d5e4567a6acb\" (UID: \"9a48027b-7d8a-44ad-a2df-d5e4567a6acb\") "
	Dec 12 20:12:22 addons-603031 kubelet[1291]: I1212 20:12:22.368590    1291 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5trzs\" (UniqueName: \"kubernetes.io/projected/661a86de-9f93-43a7-8a99-f123aa9ee271-kube-api-access-5trzs\") pod \"661a86de-9f93-43a7-8a99-f123aa9ee271\" (UID: \"661a86de-9f93-43a7-8a99-f123aa9ee271\") "
	Dec 12 20:12:22 addons-603031 kubelet[1291]: I1212 20:12:22.372403    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661a86de-9f93-43a7-8a99-f123aa9ee271-kube-api-access-5trzs" (OuterVolumeSpecName: "kube-api-access-5trzs") pod "661a86de-9f93-43a7-8a99-f123aa9ee271" (UID: "661a86de-9f93-43a7-8a99-f123aa9ee271"). InnerVolumeSpecName "kube-api-access-5trzs". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 20:12:22 addons-603031 kubelet[1291]: I1212 20:12:22.374299    1291 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a48027b-7d8a-44ad-a2df-d5e4567a6acb-kube-api-access-cs9m5" (OuterVolumeSpecName: "kube-api-access-cs9m5") pod "9a48027b-7d8a-44ad-a2df-d5e4567a6acb" (UID: "9a48027b-7d8a-44ad-a2df-d5e4567a6acb"). InnerVolumeSpecName "kube-api-access-cs9m5". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 12 20:12:22 addons-603031 kubelet[1291]: I1212 20:12:22.470114    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cs9m5\" (UniqueName: \"kubernetes.io/projected/9a48027b-7d8a-44ad-a2df-d5e4567a6acb-kube-api-access-cs9m5\") on node \"addons-603031\" DevicePath \"\""
	Dec 12 20:12:22 addons-603031 kubelet[1291]: I1212 20:12:22.470153    1291 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5trzs\" (UniqueName: \"kubernetes.io/projected/661a86de-9f93-43a7-8a99-f123aa9ee271-kube-api-access-5trzs\") on node \"addons-603031\" DevicePath \"\""
	Dec 12 20:12:23 addons-603031 kubelet[1291]: I1212 20:12:23.216776    1291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fea644d7881e99c9bd27b03196090d47e8c2a96970439cdf3384ae7064d202e"
	Dec 12 20:12:23 addons-603031 kubelet[1291]: I1212 20:12:23.223194    1291 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14fc32f3169db19a58c436e34d61122ac0ffd999e0f15fcd822a9ef2927796ef"
	Dec 12 20:12:31 addons-603031 kubelet[1291]: I1212 20:12:31.738548    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-ldgrj" podStartSLOduration=70.659579021 podStartE2EDuration="1m29.738530405s" podCreationTimestamp="2025-12-12 20:11:02 +0000 UTC" firstStartedPulling="2025-12-12 20:12:06.565454613 +0000 UTC m=+74.286631335" lastFinishedPulling="2025-12-12 20:12:25.644405981 +0000 UTC m=+93.365582719" observedRunningTime="2025-12-12 20:12:26.267559345 +0000 UTC m=+93.988736067" watchObservedRunningTime="2025-12-12 20:12:31.738530405 +0000 UTC m=+99.459707118"
	Dec 12 20:12:33 addons-603031 kubelet[1291]: I1212 20:12:33.279566    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-xvdhs" podStartSLOduration=69.779359559 podStartE2EDuration="1m30.279535883s" podCreationTimestamp="2025-12-12 20:11:03 +0000 UTC" firstStartedPulling="2025-12-12 20:12:12.28521112 +0000 UTC m=+80.006387834" lastFinishedPulling="2025-12-12 20:12:32.785387444 +0000 UTC m=+100.506564158" observedRunningTime="2025-12-12 20:12:33.278177213 +0000 UTC m=+100.999353943" watchObservedRunningTime="2025-12-12 20:12:33.279535883 +0000 UTC m=+101.000712596"
	Dec 12 20:12:36 addons-603031 kubelet[1291]: I1212 20:12:36.291282    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-fm95l" podStartSLOduration=67.098875573 podStartE2EDuration="1m30.291264631s" podCreationTimestamp="2025-12-12 20:11:06 +0000 UTC" firstStartedPulling="2025-12-12 20:12:12.751233638 +0000 UTC m=+80.472410352" lastFinishedPulling="2025-12-12 20:12:35.943622696 +0000 UTC m=+103.664799410" observedRunningTime="2025-12-12 20:12:36.289072804 +0000 UTC m=+104.010249534" watchObservedRunningTime="2025-12-12 20:12:36.291264631 +0000 UTC m=+104.012441345"
	Dec 12 20:12:38 addons-603031 kubelet[1291]: I1212 20:12:38.625570    1291 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 12 20:12:38 addons-603031 kubelet[1291]: I1212 20:12:38.625657    1291 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 12 20:12:40 addons-603031 kubelet[1291]: I1212 20:12:40.426307    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba41c853-354c-4f47-bf56-8bb1772a5dc1" path="/var/lib/kubelet/pods/ba41c853-354c-4f47-bf56-8bb1772a5dc1/volumes"
	Dec 12 20:12:41 addons-603031 kubelet[1291]: I1212 20:12:41.342982    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-5b869" podStartSLOduration=1.868751244 podStartE2EDuration="1m2.342964877s" podCreationTimestamp="2025-12-12 20:11:39 +0000 UTC" firstStartedPulling="2025-12-12 20:11:40.697092978 +0000 UTC m=+48.418269691" lastFinishedPulling="2025-12-12 20:12:41.17130661 +0000 UTC m=+108.892483324" observedRunningTime="2025-12-12 20:12:41.339568713 +0000 UTC m=+109.060745451" watchObservedRunningTime="2025-12-12 20:12:41.342964877 +0000 UTC m=+109.064141591"
	Dec 12 20:12:43 addons-603031 kubelet[1291]: E1212 20:12:43.795230    1291 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 12 20:12:43 addons-603031 kubelet[1291]: E1212 20:12:43.795824    1291 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e7c826f-448c-4599-a385-861be612bf36-gcr-creds podName:3e7c826f-448c-4599-a385-861be612bf36 nodeName:}" failed. No retries permitted until 2025-12-12 20:13:47.795800957 +0000 UTC m=+175.516977671 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/3e7c826f-448c-4599-a385-861be612bf36-gcr-creds") pod "registry-creds-764b6fb674-7zll2" (UID: "3e7c826f-448c-4599-a385-861be612bf36") : secret "registry-creds-gcr" not found
	Dec 12 20:12:43 addons-603031 kubelet[1291]: I1212 20:12:43.996828    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tk9f\" (UniqueName: \"kubernetes.io/projected/1ce0a663-cd7a-4247-ba0d-fcaaf2f5818e-kube-api-access-6tk9f\") pod \"busybox\" (UID: \"1ce0a663-cd7a-4247-ba0d-fcaaf2f5818e\") " pod="default/busybox"
	Dec 12 20:12:43 addons-603031 kubelet[1291]: I1212 20:12:43.997115    1291 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1ce0a663-cd7a-4247-ba0d-fcaaf2f5818e-gcp-creds\") pod \"busybox\" (UID: \"1ce0a663-cd7a-4247-ba0d-fcaaf2f5818e\") " pod="default/busybox"
	Dec 12 20:12:52 addons-603031 kubelet[1291]: I1212 20:12:52.448900    1291 scope.go:117] "RemoveContainer" containerID="e4b84f4534c09fe58f8fab12a55e9c88bb9a625833d4c0fd1cb3a26d64ed568e"
	Dec 12 20:12:52 addons-603031 kubelet[1291]: E1212 20:12:52.581250    1291 fsHandler.go:119] failed to collect filesystem stats - rootDiskErr: could not stat "/var/lib/containers/storage/overlay/dd270602adf5cb70b2d9a9886193a20a3e873a55ba72a7c405bffa391cbff980/diff" to get inode usage: stat /var/lib/containers/storage/overlay/dd270602adf5cb70b2d9a9886193a20a3e873a55ba72a7c405bffa391cbff980/diff: no such file or directory, extraDiskErr: <nil>
	Dec 12 20:12:53 addons-603031 kubelet[1291]: I1212 20:12:53.034152    1291 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=7.838488455 podStartE2EDuration="10.034133639s" podCreationTimestamp="2025-12-12 20:12:43 +0000 UTC" firstStartedPulling="2025-12-12 20:12:44.297763223 +0000 UTC m=+112.018939936" lastFinishedPulling="2025-12-12 20:12:46.493408406 +0000 UTC m=+114.214585120" observedRunningTime="2025-12-12 20:12:47.378431391 +0000 UTC m=+115.099608146" watchObservedRunningTime="2025-12-12 20:12:53.034133639 +0000 UTC m=+120.755310361"
	Dec 12 20:12:53 addons-603031 kubelet[1291]: E1212 20:12:53.231249    1291 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:38174->127.0.0.1:39861: write tcp 127.0.0.1:38174->127.0.0.1:39861: write: broken pipe
	Dec 12 20:12:54 addons-603031 kubelet[1291]: I1212 20:12:54.424462    1291 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a48027b-7d8a-44ad-a2df-d5e4567a6acb" path="/var/lib/kubelet/pods/9a48027b-7d8a-44ad-a2df-d5e4567a6acb/volumes"
	
	
	==> storage-provisioner [e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22] <==
	W1212 20:12:31.317879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:33.321325       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:33.325932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:35.328686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:35.335617       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:37.338475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:37.343235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:39.346399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:39.352598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:41.369767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:41.378589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:43.381263       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:43.389077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:45.394130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:45.399356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:47.403282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:47.408155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:49.411514       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:49.416403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:51.420455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:51.425030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:53.429159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:53.440623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:55.443865       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1212 20:12:55.450269       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-603031 -n addons-603031
helpers_test.go:270: (dbg) Run:  kubectl --context addons-603031 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-szxsl ingress-nginx-admission-patch-9v2hg registry-creds-764b6fb674-7zll2
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-603031 describe pod ingress-nginx-admission-create-szxsl ingress-nginx-admission-patch-9v2hg registry-creds-764b6fb674-7zll2
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-603031 describe pod ingress-nginx-admission-create-szxsl ingress-nginx-admission-patch-9v2hg registry-creds-764b6fb674-7zll2: exit status 1 (104.908951ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-szxsl" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9v2hg" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-7zll2" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-603031 describe pod ingress-nginx-admission-create-szxsl ingress-nginx-admission-patch-9v2hg registry-creds-764b6fb674-7zll2: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable headlamp --alsologtostderr -v=1: exit status 11 (277.155625ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:12:56.942798  372433 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:12:56.943646  372433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:56.943666  372433 out.go:374] Setting ErrFile to fd 2...
	I1212 20:12:56.943673  372433 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:12:56.943983  372433 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:12:56.944334  372433 mustload.go:66] Loading cluster: addons-603031
	I1212 20:12:56.945006  372433 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:56.945031  372433 addons.go:622] checking whether the cluster is paused
	I1212 20:12:56.945202  372433 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:12:56.945221  372433 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:12:56.945793  372433 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:12:56.963192  372433 ssh_runner.go:195] Run: systemctl --version
	I1212 20:12:56.963259  372433 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:12:56.988880  372433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:12:57.099012  372433 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:12:57.099150  372433 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:12:57.129246  372433 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:12:57.129271  372433 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:12:57.129280  372433 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:12:57.129285  372433 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:12:57.129288  372433 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:12:57.129292  372433 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:12:57.129301  372433 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:12:57.129313  372433 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:12:57.129319  372433 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:12:57.129328  372433 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:12:57.129335  372433 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:12:57.129339  372433 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:12:57.129342  372433 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:12:57.129345  372433 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:12:57.129348  372433 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:12:57.129358  372433 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:12:57.129365  372433 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:12:57.129376  372433 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:12:57.129391  372433 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:12:57.129395  372433 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:12:57.129400  372433 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:12:57.129405  372433 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:12:57.129409  372433 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:12:57.129412  372433 cri.go:89] found id: ""
	I1212 20:12:57.129474  372433 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:12:57.144804  372433 out.go:203] 
	W1212 20:12:57.147683  372433 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:12:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:12:57.147714  372433 out.go:285] * 
	* 
	W1212 20:12:57.152761  372433 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:12:57.155508  372433 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (3.28s)
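Every addons-disable failure in this run follows the same pattern visible in the stderr above: before disabling an addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node, and that second command exits with status 1 because /run/runc does not exist on this crio-based node, which aborts the disable with MK_ADDON_DISABLE_PAUSED. A minimal reproduction sketch, reusing commands already captured above (profile name addons-603031 is taken from this run; the direct `sudo crictl` form stands in for the `sudo -s eval` wrapper minikube uses; illustration only, not part of the test suite):

	# Listing kube-system containers succeeds, mirroring the cri.go "found id" lines above.
	out/minikube-linux-arm64 -p addons-603031 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# The runc listing is the step that fails: open /run/runc: no such file or directory.
	out/minikube-linux-arm64 -p addons-603031 ssh "sudo runc list -f json"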

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-cfmxl" [43065036-cfbd-4e36-8abd-647fe878b445] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.004319036s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (271.767233ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:13:15.781587  372907 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:13:15.782724  372907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:15.782742  372907 out.go:374] Setting ErrFile to fd 2...
	I1212 20:13:15.782747  372907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:15.783441  372907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:13:15.783787  372907 mustload.go:66] Loading cluster: addons-603031
	I1212 20:13:15.784161  372907 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:15.784180  372907 addons.go:622] checking whether the cluster is paused
	I1212 20:13:15.784285  372907 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:15.784300  372907 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:13:15.784899  372907 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:13:15.801505  372907 ssh_runner.go:195] Run: systemctl --version
	I1212 20:13:15.801563  372907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:13:15.821692  372907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:13:15.931142  372907 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:13:15.931237  372907 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:13:15.970283  372907 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:13:15.970307  372907 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:13:15.970313  372907 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:13:15.970316  372907 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:13:15.970320  372907 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:13:15.970323  372907 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:13:15.970327  372907 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:13:15.970330  372907 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:13:15.970333  372907 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:13:15.970341  372907 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:13:15.970345  372907 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:13:15.970348  372907 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:13:15.970352  372907 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:13:15.970355  372907 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:13:15.970360  372907 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:13:15.970371  372907 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:13:15.970374  372907 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:13:15.970379  372907 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:13:15.970383  372907 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:13:15.970386  372907 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:13:15.970395  372907 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:13:15.970399  372907 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:13:15.970404  372907 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:13:15.970407  372907 cri.go:89] found id: ""
	I1212 20:13:15.970458  372907 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:13:15.987150  372907 out.go:203] 
	W1212 20:13:15.990117  372907 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:13:15.990153  372907 out.go:285] * 
	* 
	W1212 20:13:15.995312  372907 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:13:15.998246  372907 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (6.29s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.45s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-603031 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-603031 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-603031 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [c5b4b5e0-1196-4d4b-b826-4a85afaccea9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [c5b4b5e0-1196-4d4b-b826-4a85afaccea9] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [c5b4b5e0-1196-4d4b-b826-4a85afaccea9] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003445821s
addons_test.go:969: (dbg) Run:  kubectl --context addons-603031 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 ssh "cat /opt/local-path-provisioner/pvc-2335e9a8-fead-435f-8d4b-708dc5b5c2fe_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-603031 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-603031 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (277.124418ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:13:18.175807  373061 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:13:18.176467  373061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:18.176481  373061 out.go:374] Setting ErrFile to fd 2...
	I1212 20:13:18.176488  373061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:18.176769  373061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:13:18.177082  373061 mustload.go:66] Loading cluster: addons-603031
	I1212 20:13:18.177474  373061 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:18.177496  373061 addons.go:622] checking whether the cluster is paused
	I1212 20:13:18.177603  373061 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:18.177617  373061 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:13:18.178116  373061 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:13:18.200212  373061 ssh_runner.go:195] Run: systemctl --version
	I1212 20:13:18.200277  373061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:13:18.219102  373061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:13:18.327138  373061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:13:18.327222  373061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:13:18.365130  373061 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:13:18.365153  373061 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:13:18.365158  373061 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:13:18.365164  373061 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:13:18.365168  373061 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:13:18.365172  373061 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:13:18.365175  373061 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:13:18.365178  373061 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:13:18.365181  373061 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:13:18.365187  373061 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:13:18.365191  373061 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:13:18.365194  373061 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:13:18.365197  373061 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:13:18.365200  373061 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:13:18.365204  373061 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:13:18.365209  373061 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:13:18.365217  373061 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:13:18.365221  373061 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:13:18.365224  373061 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:13:18.365227  373061 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:13:18.365232  373061 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:13:18.365239  373061 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:13:18.365242  373061 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:13:18.365245  373061 cri.go:89] found id: ""
	I1212 20:13:18.365300  373061 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:13:18.381494  373061 out.go:203] 
	W1212 20:13:18.385020  373061 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:13:18.385043  373061 out.go:285] * 
	* 
	W1212 20:13:18.390100  373061 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:13:18.395241  373061 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.45s)
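For reference, the LocalPath volume flow itself passed before the disable step failed: the PVC bound, the test-local-path pod completed, and file1 was read back from the provisioner's host path. A minimal sketch of repeating that verification by hand while the claim still exists (the test deletes the pod and PVC at the end, and the pvc-... directory name is generated per claim, so the parent directory is listed rather than hard-coded; illustration only):

	kubectl --context addons-603031 get pvc test-pvc -o jsonpath='{.status.phase}'
	out/minikube-linux-arm64 -p addons-603031 ssh "ls /opt/local-path-provisioner/"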

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.29s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-sthfk" [53d6fdbe-56a6-4389-a0fb-291144b3bed2] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003313161s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (285.573697ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:13:09.492262  372725 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:13:09.493111  372725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:09.493127  372725 out.go:374] Setting ErrFile to fd 2...
	I1212 20:13:09.493133  372725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:09.493410  372725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:13:09.493712  372725 mustload.go:66] Loading cluster: addons-603031
	I1212 20:13:09.494092  372725 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:09.494111  372725 addons.go:622] checking whether the cluster is paused
	I1212 20:13:09.494221  372725 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:09.494236  372725 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:13:09.494734  372725 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:13:09.517348  372725 ssh_runner.go:195] Run: systemctl --version
	I1212 20:13:09.517422  372725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:13:09.534733  372725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:13:09.643092  372725 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:13:09.643193  372725 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:13:09.685508  372725 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:13:09.685529  372725 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:13:09.685534  372725 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:13:09.685538  372725 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:13:09.685542  372725 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:13:09.685545  372725 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:13:09.685548  372725 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:13:09.685551  372725 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:13:09.685556  372725 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:13:09.685563  372725 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:13:09.685567  372725 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:13:09.685570  372725 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:13:09.685574  372725 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:13:09.685577  372725 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:13:09.685580  372725 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:13:09.685585  372725 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:13:09.685589  372725 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:13:09.685593  372725 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:13:09.685596  372725 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:13:09.685599  372725 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:13:09.685604  372725 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:13:09.685610  372725 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:13:09.685614  372725 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:13:09.685617  372725 cri.go:89] found id: ""
	I1212 20:13:09.685669  372725 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:13:09.702088  372725 out.go:203] 
	W1212 20:13:09.705556  372725 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:13:09.705605  372725 out.go:285] * 
	* 
	W1212 20:13:09.710950  372725 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:13:09.714805  372725 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.29s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-v447b" [fe96e256-3cee-41e6-aad3-fff5cecc2eb1] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002931917s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-arm64 -p addons-603031 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-603031 addons disable yakd --alsologtostderr -v=1: exit status 11 (265.171802ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:13:03.212928  372499 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:13:03.213727  372499 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:03.213765  372499 out.go:374] Setting ErrFile to fd 2...
	I1212 20:13:03.213787  372499 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:13:03.214101  372499 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:13:03.214501  372499 mustload.go:66] Loading cluster: addons-603031
	I1212 20:13:03.214968  372499 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:03.215011  372499 addons.go:622] checking whether the cluster is paused
	I1212 20:13:03.215167  372499 config.go:182] Loaded profile config "addons-603031": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:13:03.215199  372499 host.go:66] Checking if "addons-603031" exists ...
	I1212 20:13:03.215883  372499 cli_runner.go:164] Run: docker container inspect addons-603031 --format={{.State.Status}}
	I1212 20:13:03.234592  372499 ssh_runner.go:195] Run: systemctl --version
	I1212 20:13:03.234708  372499 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-603031
	I1212 20:13:03.258552  372499 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33147 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/addons-603031/id_rsa Username:docker}
	I1212 20:13:03.366088  372499 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:13:03.366194  372499 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:13:03.396678  372499 cri.go:89] found id: "2fd403a0a3c1f792b834f745c31d8bfcebce4caf685ec58f02cf7567341378a7"
	I1212 20:13:03.396703  372499 cri.go:89] found id: "4ad63355a4185dacfe5649e3bfc87b0a981dee2377c3bc30ddbda4c7f1f15906"
	I1212 20:13:03.396708  372499 cri.go:89] found id: "38a91c939e2679c8ba17c9f4a01e487e245c8c4b5fb107650d20cbd768079f2f"
	I1212 20:13:03.396712  372499 cri.go:89] found id: "de7b51c83e158b78ebb2dd21e940397331831da19c8c874083b4f5d7bcc05bda"
	I1212 20:13:03.396715  372499 cri.go:89] found id: "4809fb232f66882791f07746602cbcd7913905247fc496d42b7d3f27350575ef"
	I1212 20:13:03.396718  372499 cri.go:89] found id: "a82f1ede5674394e7124a41291e097b74fc08e962d87ccdd7b1282a0898ebac9"
	I1212 20:13:03.396721  372499 cri.go:89] found id: "f53fc93dd83c09c0b5144fae467663cecd9cf0a753ff3a57aaaec97109aca2be"
	I1212 20:13:03.396745  372499 cri.go:89] found id: "e415e482778c5d261f840b54e120ed9819267bc60b688e7aeb8032560021c173"
	I1212 20:13:03.396756  372499 cri.go:89] found id: "d5c2cf4090c13f9b1e2fac796f262a890a9aad56fef9ee64efaf94d985b283b1"
	I1212 20:13:03.396763  372499 cri.go:89] found id: "9fac64cdd9389952b2e17e58f33431aebf12e50a96f5b6fda20af61ab9e88e96"
	I1212 20:13:03.396767  372499 cri.go:89] found id: "bfb13326e68f27ee58a131a35efc99011c1c81fef6c11ea69937d5f3a4603f9c"
	I1212 20:13:03.396770  372499 cri.go:89] found id: "421200960de75cfd82828a345a9b9efb813c0d7a8b6726a98b6a19f2269d4e8f"
	I1212 20:13:03.396774  372499 cri.go:89] found id: "f3c43f32965a1f0b8665fbec2a77bedb1d27563d507c1713dafdd55636dca6b0"
	I1212 20:13:03.396778  372499 cri.go:89] found id: "ce0ade5e7b384b601ed7081aad5781b988113fcf2e663c3cda8f56f775acd7f4"
	I1212 20:13:03.396792  372499 cri.go:89] found id: "bdd23d655fa5593c117ddc831b318b9174c685fea69f0f74e8937c0068a303fc"
	I1212 20:13:03.396801  372499 cri.go:89] found id: "dc26db242e241453e7d5ed63563713a3b4816c34b41ff8f939bbb34bbf46b3dd"
	I1212 20:13:03.396804  372499 cri.go:89] found id: "e1266d6c75a1ed8657a1773e0dc06aabee28fd9fae5d73628e13ed933f1c8a22"
	I1212 20:13:03.396820  372499 cri.go:89] found id: "f05b6cd78460f589d8ded390d4e1baf25eeb70e9b75d9b8ba28c586431ef9896"
	I1212 20:13:03.396830  372499 cri.go:89] found id: "d5b835e400afbcb6ec18fda631d1323c4a3a001dd8103d1193776bc98dc28165"
	I1212 20:13:03.396833  372499 cri.go:89] found id: "6b921948e7a2bf30f374cf543193d362a977d27c574b4df270b619f556c268d3"
	I1212 20:13:03.396838  372499 cri.go:89] found id: "389edf543c495e9d3f3ae3b44f4b6b3206037bf3e6d1e64230a715d7bac2658c"
	I1212 20:13:03.396842  372499 cri.go:89] found id: "e4de15886f6710d0e348734b7736caabba99351cda63e47c4906d88355456ec6"
	I1212 20:13:03.396845  372499 cri.go:89] found id: "53fcf67696a942f67b27fc0190bd6dd16c16d9cc7281a626773bc7e94d1a13b6"
	I1212 20:13:03.396848  372499 cri.go:89] found id: ""
	I1212 20:13:03.396928  372499 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 20:13:03.412541  372499 out.go:203] 
	W1212 20:13:03.415439  372499 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:03Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:13:03Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 20:13:03.415467  372499 out.go:285] * 
	* 
	W1212 20:13:03.420745  372499 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:13:03.423745  372499 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-arm64 -p addons-603031 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.27s)
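The disable fails at the "check paused" step: `sudo runc list -f json` exits 1 because /run/runc does not exist on this CRI-O node. Below is a minimal Go sketch of that probe run from the host over `minikube ssh` (the profile name is taken from the log above); treating the missing state directory as "nothing paused" is an assumption about how the error could be tolerated, not minikube's actual handling.

// paused_check.go - reproduces the failing "list paused" probe from outside the node.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "addons-603031" // profile from this report
	out, err := exec.Command("minikube", "-p", profile, "ssh",
		"sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		if strings.Contains(string(out), "/run/runc: no such file or directory") {
			// assumption: no runc state directory means no paused containers
			fmt.Println("runc keeps no state here; nothing to report as paused")
			return
		}
		fmt.Printf("runc list failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("runc state:\n%s", out)
}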

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-261311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
E1212 20:22:44.064632  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:23:11.776549  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:36.834301  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:36.840708  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:36.852260  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:36.873734  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:36.915216  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:36.996758  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:37.158353  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:37.480128  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:38.122241  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:39.403733  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:41.966706  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:47.088156  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:24:57.329493  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:17.810890  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:25:58.772701  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:27:20.697515  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:27:44.064557  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-261311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m20.873848576s)

                                                
                                                
-- stdout --
	* [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Found network options:
	  - HTTP_PROXY=localhost:43141
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:43141 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-261311 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-261311 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000216488s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095694s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095694s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

                                                
                                                
** /stderr **
functional_test.go:2241: failed minikube start. args "out/minikube-linux-arm64 start -p functional-261311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0": exit status 109
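The start fails in kubeadm's wait-control-plane phase: the kubelet health endpoint at http://127.0.0.1:10248/healthz never answers within the 4m0s window, so kubeadm gives up (the printed follow-ups are `journalctl -xeu kubelet` and `--extra-config=kubelet.cgroup-driver=systemd`). A minimal Go sketch of that readiness probe, reusing the deadline from the log; it has to run on the node itself:

// kubelet_healthz.go - polls the kubelet health endpoint the way the
// wait-control-plane phase describes above.
package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	const url = "http://127.0.0.1:10248/healthz"
	for {
		resp, err := http.Get(url)
		if err == nil {
			healthy := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if healthy {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		select {
		case <-ctx.Done():
			fmt.Printf("kubelet never became healthy; last error: %v\n", err)
			return
		case <-time.After(2 * time.Second):
		}
	}
}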
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
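A minimal Go sketch of the proxy snapshot above, plus the NO_PROXY check that the start warning complains about (the minikube IP 192.168.49.2 is taken from the warning in the stderr block):

// proxy_snapshot.go - prints the same three proxy variables and flags a
// NO_PROXY that does not cover the minikube IP.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
		fmt.Printf("%s=%q\n", k, os.Getenv(k))
	}
	const minikubeIP = "192.168.49.2"
	if !strings.Contains(os.Getenv("NO_PROXY"), minikubeIP) {
		fmt.Println("NO_PROXY does not include", minikubeIP, "- minikube will warn, as seen above")
	}
}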
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
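The harness extracts the forwarded SSH port from this inspect output with a Go template (see the cli_runner line near the top of this section). A minimal Go sketch doing the same with encoding/json; it assumes only the Docker CLI on PATH and the container name from this report:

// ssh_port.go - reads `docker inspect` and prints the host port bound to 22/tcp.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "functional-261311").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil || len(cs) == 0 {
		fmt.Println("unexpected inspect output:", err)
		return
	}
	bindings := cs[0].NetworkSettings.Ports["22/tcp"]
	if len(bindings) == 0 {
		fmt.Println("no host port published for 22/tcp")
		return
	}
	fmt.Println("ssh host port:", bindings[0].HostPort) // 33162 in the JSON above
}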
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 6 (326.95319ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 20:28:50.054970  398614 status.go:458] kubeconfig endpoint: get endpoint: "functional-261311" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
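The status check degrades to exit status 6 because the cluster never registered its endpoint, so "functional-261311" is missing from the kubeconfig; the stdout above suggests `minikube update-context` as the fix. A minimal Go sketch of that lookup, assuming the k8s.io/client-go module is available (minikube's own status code may differ):

// kubeconfig_check.go - checks whether the profile has a context in the kubeconfig.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	path := os.Getenv("KUBECONFIG") // e.g. the kubeconfig path printed in this report
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Println("could not load kubeconfig:", err)
		return
	}
	if _, ok := cfg.Contexts["functional-261311"]; !ok {
		fmt.Println("functional-261311 missing from kubeconfig; `minikube update-context` would repoint it")
		return
	}
	fmt.Println("kubeconfig already has a context for functional-261311")
}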
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-205528 image save kicbase/echo-server:functional-205528 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image rm kicbase/echo-server:functional-205528 --alsologtostderr                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                                │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                                │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image save --daemon kicbase/echo-server:functional-205528 --alsologtostderr                                                             │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/364853.pem                                                                                                  │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /usr/share/ca-certificates/364853.pem                                                                                      │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/test/nested/copy/364853/hosts                                                                                         │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/3648532.pem                                                                                                 │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /usr/share/ca-certificates/3648532.pem                                                                                     │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format short --alsologtostderr                                                                                               │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format yaml --alsologtostderr                                                                                                │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh pgrep buildkitd                                                                                                                     │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ image          │ functional-205528 image ls --format json --alsologtostderr                                                                                                │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr                                                    │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format table --alsologtostderr                                                                                               │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                                   │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                                   │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                                   │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                                │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ delete         │ -p functional-205528                                                                                                                                      │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ start          │ -p functional-261311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0         │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:20:28
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:20:28.888146  393053 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:20:28.888252  393053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:20:28.888256  393053 out.go:374] Setting ErrFile to fd 2...
	I1212 20:20:28.888259  393053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:20:28.888539  393053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:20:28.888944  393053 out.go:368] Setting JSON to false
	I1212 20:20:28.889764  393053 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10981,"bootTime":1765559848,"procs":157,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:20:28.889828  393053 start.go:143] virtualization:  
	I1212 20:20:28.894927  393053 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:20:28.897753  393053 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:20:28.897870  393053 notify.go:221] Checking for updates...
	I1212 20:20:28.904647  393053 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:20:28.907487  393053 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:20:28.910273  393053 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:20:28.913150  393053 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:20:28.916027  393053 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:20:28.919062  393053 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:20:28.944653  393053 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:20:28.944764  393053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:20:29.011170  393053 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-12 20:20:28.999045024 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:20:29.011262  393053 docker.go:319] overlay module found
	I1212 20:20:29.014479  393053 out.go:179] * Using the docker driver based on user configuration
	I1212 20:20:29.017300  393053 start.go:309] selected driver: docker
	I1212 20:20:29.017312  393053 start.go:927] validating driver "docker" against <nil>
	I1212 20:20:29.017324  393053 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:20:29.018065  393053 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:20:29.075859  393053 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:43 SystemTime:2025-12-12 20:20:29.065851128 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:20:29.076003  393053 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:20:29.076236  393053 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:20:29.079054  393053 out.go:179] * Using Docker driver with root privileges
	I1212 20:20:29.081943  393053 cni.go:84] Creating CNI manager for ""
	I1212 20:20:29.082012  393053 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:20:29.082022  393053 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:20:29.082111  393053 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:20:29.085220  393053 out.go:179] * Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	I1212 20:20:29.088046  393053 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:20:29.091011  393053 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:20:29.093940  393053 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:20:29.094009  393053 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:20:29.094025  393053 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:20:29.094034  393053 cache.go:65] Caching tarball of preloaded images
	I1212 20:20:29.094128  393053 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:20:29.094145  393053 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:20:29.094556  393053 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:20:29.094578  393053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json: {Name:mk25cf795060b2b1252c3231986d76c48e8ebc69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:20:29.113229  393053 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:20:29.113240  393053 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:20:29.113259  393053 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:20:29.113290  393053 start.go:360] acquireMachinesLock for functional-261311: {Name:mkbc4e6c743e47953e99b8ce65e244d33b483105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:20:29.113417  393053 start.go:364] duration metric: took 98.315µs to acquireMachinesLock for "functional-261311"
	I1212 20:20:29.113446  393053 start.go:93] Provisioning new machine with config: &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:20:29.113526  393053 start.go:125] createHost starting for "" (driver="docker")
	I1212 20:20:29.116802  393053 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1212 20:20:29.117124  393053 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:43141 to docker env.
	I1212 20:20:29.117146  393053 start.go:159] libmachine.API.Create for "functional-261311" (driver="docker")
	I1212 20:20:29.117168  393053 client.go:173] LocalClient.Create starting
	I1212 20:20:29.117248  393053 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem
	I1212 20:20:29.117282  393053 main.go:143] libmachine: Decoding PEM data...
	I1212 20:20:29.117296  393053 main.go:143] libmachine: Parsing certificate...
	I1212 20:20:29.117346  393053 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem
	I1212 20:20:29.117360  393053 main.go:143] libmachine: Decoding PEM data...
	I1212 20:20:29.117370  393053 main.go:143] libmachine: Parsing certificate...
	I1212 20:20:29.117724  393053 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 20:20:29.132748  393053 cli_runner.go:211] docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 20:20:29.132815  393053 network_create.go:284] running [docker network inspect functional-261311] to gather additional debugging logs...
	I1212 20:20:29.132830  393053 cli_runner.go:164] Run: docker network inspect functional-261311
	W1212 20:20:29.148455  393053 cli_runner.go:211] docker network inspect functional-261311 returned with exit code 1
	I1212 20:20:29.148475  393053 network_create.go:287] error running [docker network inspect functional-261311]: docker network inspect functional-261311: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-261311 not found
	I1212 20:20:29.148506  393053 network_create.go:289] output of [docker network inspect functional-261311]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-261311 not found
	
	** /stderr **
	I1212 20:20:29.148608  393053 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:20:29.165765  393053 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018cfdf0}
	I1212 20:20:29.165799  393053 network_create.go:124] attempt to create docker network functional-261311 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1212 20:20:29.165860  393053 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-261311 functional-261311
	I1212 20:20:29.220315  393053 network_create.go:108] docker network functional-261311 192.168.49.0/24 created
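(Annotation: the lines above show network_create.go picking a free private subnet and creating a dedicated bridge network for the node container. Below is a minimal, standalone Go sketch of that same `docker network create` invocation using os/exec; the function name and values are illustrative only, not minikube's actual network_create.go code.)

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // createClusterNetwork creates a dedicated bridge network with a fixed
    // subnet and gateway, mirroring the flags visible in the log above.
    func createClusterNetwork(name, subnet, gateway string) error {
    	args := []string{
    		"network", "create",
    		"--driver=bridge",
    		"--subnet=" + subnet,
    		"--gateway=" + gateway,
    		"-o", "--ip-masq",
    		"-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		name,
    	}
    	out, err := exec.Command("docker", args...).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("docker network create: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	if err := createClusterNetwork("functional-261311", "192.168.49.0/24", "192.168.49.1"); err != nil {
    		fmt.Println(err)
    	}
    }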
	I1212 20:20:29.220338  393053 kic.go:121] calculated static IP "192.168.49.2" for the "functional-261311" container
	I1212 20:20:29.220559  393053 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 20:20:29.236377  393053 cli_runner.go:164] Run: docker volume create functional-261311 --label name.minikube.sigs.k8s.io=functional-261311 --label created_by.minikube.sigs.k8s.io=true
	I1212 20:20:29.254690  393053 oci.go:103] Successfully created a docker volume functional-261311
	I1212 20:20:29.254789  393053 cli_runner.go:164] Run: docker run --rm --name functional-261311-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-261311 --entrypoint /usr/bin/test -v functional-261311:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 20:20:29.808623  393053 oci.go:107] Successfully prepared a docker volume functional-261311
	I1212 20:20:29.808687  393053 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:20:29.808695  393053 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 20:20:29.808761  393053 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-261311:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 20:20:33.826768  393053 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v functional-261311:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (4.017972432s)
	I1212 20:20:33.826793  393053 kic.go:203] duration metric: took 4.018093377s to extract preloaded images to volume ...
	W1212 20:20:33.826946  393053 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 20:20:33.827055  393053 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 20:20:33.882372  393053 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-261311 --name functional-261311 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-261311 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-261311 --network functional-261311 --ip 192.168.49.2 --volume functional-261311:/var --security-opt apparmor=unconfined --memory=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 20:20:34.197100  393053 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Running}}
	I1212 20:20:34.220145  393053 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:20:34.242570  393053 cli_runner.go:164] Run: docker exec functional-261311 stat /var/lib/dpkg/alternatives/iptables
	I1212 20:20:34.295821  393053 oci.go:144] the created container "functional-261311" has a running status.
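(Annotation: after the `docker run` above, the log repeatedly inspects the container state before declaring it running. A minimal Go sketch of that polling pattern, assuming the same `--format={{.State.Running}}` inspection; the helper name and timeout are illustrative.)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitRunning polls the inspection shown in the log
    // (docker container inspect --format={{.State.Running}}) until the
    // node container reports a running state or the timeout expires.
    func waitRunning(name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("docker", "container", "inspect", name,
    			"--format", "{{.State.Running}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "true" {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("container %s not running after %s", name, timeout)
    }

    func main() {
    	if err := waitRunning("functional-261311", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }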
	I1212 20:20:34.295840  393053 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa...
	I1212 20:20:34.884625  393053 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 20:20:34.902914  393053 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:20:34.919841  393053 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 20:20:34.919854  393053 kic_runner.go:114] Args: [docker exec --privileged functional-261311 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 20:20:34.963537  393053 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:20:34.980580  393053 machine.go:94] provisionDockerMachine start ...
	I1212 20:20:34.980689  393053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:20:34.997962  393053 main.go:143] libmachine: Using SSH client type: native
	I1212 20:20:34.998291  393053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:20:34.998298  393053 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:20:34.998939  393053 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59820->127.0.0.1:33162: read: connection reset by peer
	I1212 20:20:38.160078  393053 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:20:38.160094  393053 ubuntu.go:182] provisioning hostname "functional-261311"
	I1212 20:20:38.160158  393053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:20:38.177213  393053 main.go:143] libmachine: Using SSH client type: native
	I1212 20:20:38.177528  393053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:20:38.177536  393053 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-261311 && echo "functional-261311" | sudo tee /etc/hostname
	I1212 20:20:38.338189  393053 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:20:38.338259  393053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:20:38.356587  393053 main.go:143] libmachine: Using SSH client type: native
	I1212 20:20:38.356897  393053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:20:38.356910  393053 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-261311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-261311/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-261311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:20:38.508743  393053 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:20:38.508760  393053 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:20:38.508781  393053 ubuntu.go:190] setting up certificates
	I1212 20:20:38.508789  393053 provision.go:84] configureAuth start
	I1212 20:20:38.508865  393053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:20:38.525981  393053 provision.go:143] copyHostCerts
	I1212 20:20:38.526053  393053 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:20:38.526060  393053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:20:38.526139  393053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:20:38.526238  393053 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:20:38.526242  393053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:20:38.526267  393053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:20:38.526323  393053 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:20:38.526326  393053 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:20:38.526349  393053 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:20:38.526397  393053 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.functional-261311 san=[127.0.0.1 192.168.49.2 functional-261311 localhost minikube]
	I1212 20:20:38.619411  393053 provision.go:177] copyRemoteCerts
	I1212 20:20:38.619473  393053 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:20:38.619522  393053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:20:38.636815  393053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:20:38.744568  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:20:38.763280  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:20:38.781779  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:20:38.800032  393053 provision.go:87] duration metric: took 291.210584ms to configureAuth
	I1212 20:20:38.800061  393053 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:20:38.800284  393053 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:20:38.800422  393053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:20:38.818668  393053 main.go:143] libmachine: Using SSH client type: native
	I1212 20:20:38.818981  393053 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:20:38.818992  393053 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:20:39.127725  393053 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:20:39.127743  393053 machine.go:97] duration metric: took 4.147139055s to provisionDockerMachine
	I1212 20:20:39.127751  393053 client.go:176] duration metric: took 10.010578622s to LocalClient.Create
	I1212 20:20:39.127772  393053 start.go:167] duration metric: took 10.010624457s to libmachine.API.Create "functional-261311"
	I1212 20:20:39.127778  393053 start.go:293] postStartSetup for "functional-261311" (driver="docker")
	I1212 20:20:39.127803  393053 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:20:39.127869  393053 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:20:39.127907  393053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:20:39.147611  393053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:20:39.252792  393053 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:20:39.256405  393053 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:20:39.256424  393053 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:20:39.256435  393053 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:20:39.256507  393053 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:20:39.256602  393053 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:20:39.256683  393053 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> hosts in /etc/test/nested/copy/364853
	I1212 20:20:39.256726  393053 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364853
	I1212 20:20:39.264700  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:20:39.282679  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts --> /etc/test/nested/copy/364853/hosts (40 bytes)
	I1212 20:20:39.300853  393053 start.go:296] duration metric: took 173.061053ms for postStartSetup
	I1212 20:20:39.301223  393053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:20:39.317942  393053 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:20:39.318202  393053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:20:39.318251  393053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:20:39.335684  393053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:20:39.437395  393053 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:20:39.442236  393053 start.go:128] duration metric: took 10.328696095s to createHost
	I1212 20:20:39.442251  393053 start.go:83] releasing machines lock for "functional-261311", held for 10.328826362s
	I1212 20:20:39.442322  393053 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:20:39.463437  393053 out.go:179] * Found network options:
	I1212 20:20:39.466337  393053 out.go:179]   - HTTP_PROXY=localhost:43141
	W1212 20:20:39.469378  393053 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1212 20:20:39.472417  393053 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1212 20:20:39.475392  393053 ssh_runner.go:195] Run: cat /version.json
	I1212 20:20:39.475433  393053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:20:39.475495  393053 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:20:39.475556  393053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:20:39.493756  393053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:20:39.500616  393053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:20:39.686039  393053 ssh_runner.go:195] Run: systemctl --version
	I1212 20:20:39.692718  393053 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:20:39.734666  393053 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:20:39.739446  393053 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:20:39.739548  393053 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:20:39.769179  393053 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 20:20:39.769193  393053 start.go:496] detecting cgroup driver to use...
	I1212 20:20:39.769227  393053 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:20:39.769288  393053 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:20:39.789114  393053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:20:39.802554  393053 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:20:39.802609  393053 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:20:39.821027  393053 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:20:39.840727  393053 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:20:39.967447  393053 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:20:40.113296  393053 docker.go:234] disabling docker service ...
	I1212 20:20:40.113381  393053 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:20:40.139832  393053 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:20:40.155496  393053 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:20:40.291961  393053 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:20:40.421574  393053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:20:40.434317  393053 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:20:40.454523  393053 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:20:40.454579  393053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:40.463805  393053 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:20:40.463877  393053 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:40.473253  393053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:40.481971  393053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:40.490906  393053 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:20:40.499138  393053 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:40.507965  393053 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:40.521468  393053 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:20:40.530533  393053 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:20:40.538149  393053 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:20:40.545523  393053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:20:40.656840  393053 ssh_runner.go:195] Run: sudo systemctl restart crio
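(Annotation: the sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, default sysctls) before crio is restarted. A minimal Go sketch of the same kind of line-oriented rewrite, restricted to the pause image and cgroup manager, assuming the file is readable locally; this is an illustration, not minikube's crio.go, which runs the edits over SSH.)

    package main

    import (
    	"log"
    	"os"
    	"regexp"
    )

    // rewriteCrioConf updates the pause image and cgroup manager lines in a
    // crio drop-in config, mirroring the sed edits shown in the log above.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(`cgroup_manager = "`+cgroupManager+`"`))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	// Path and values taken from the log; adjust for local use.
    	if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.10.1", "cgroupfs"); err != nil {
    		log.Fatal(err)
    	}
    }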
	I1212 20:20:40.838844  393053 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:20:40.838910  393053 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:20:40.842816  393053 start.go:564] Will wait 60s for crictl version
	I1212 20:20:40.842881  393053 ssh_runner.go:195] Run: which crictl
	I1212 20:20:40.846251  393053 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:20:40.876418  393053 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:20:40.876500  393053 ssh_runner.go:195] Run: crio --version
	I1212 20:20:40.905814  393053 ssh_runner.go:195] Run: crio --version
	I1212 20:20:40.943110  393053 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:20:40.946007  393053 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:20:40.962312  393053 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:20:40.966316  393053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:20:40.976224  393053 kubeadm.go:884] updating cluster {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:20:40.976335  393053 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:20:40.976440  393053 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:20:41.015763  393053 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:20:41.015775  393053 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:20:41.015832  393053 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:20:41.041693  393053 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:20:41.041705  393053 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:20:41.041711  393053 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 20:20:41.041802  393053 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-261311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:20:41.041891  393053 ssh_runner.go:195] Run: crio config
	I1212 20:20:41.115444  393053 cni.go:84] Creating CNI manager for ""
	I1212 20:20:41.115455  393053 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:20:41.115468  393053 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:20:41.115492  393053 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-261311 NodeName:functional-261311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:20:41.115620  393053 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-261311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:20:41.115691  393053 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:20:41.123782  393053 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:20:41.123845  393053 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:20:41.131929  393053 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:20:41.145638  393053 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:20:41.159474  393053 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
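(Annotation: the kubeadm config dumped above is rendered from the cluster options and copied to /var/tmp/minikube/kubeadm.yaml.new. A minimal text/template sketch rendering just the ClusterConfiguration fragment of that YAML; the struct-free map and template here are illustrative and not minikube's actual template.)

    package main

    import (
    	"os"
    	"text/template"
    )

    // A trimmed-down template for the ClusterConfiguration fragment shown
    // in the log above; only a few fields are kept for illustration.
    const clusterConfig = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: {{.DNSDomain}}
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(clusterConfig))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"ControlPlaneAddress": "control-plane.minikube.internal",
    		"APIServerPort":       "8441",
    		"KubernetesVersion":   "v1.35.0-beta.0",
    		"DNSDomain":           "cluster.local",
    		"PodSubnet":           "10.244.0.0/16",
    		"ServiceCIDR":         "10.96.0.0/12",
    	})
    }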
	I1212 20:20:41.172954  393053 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:20:41.176714  393053 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:20:41.186611  393053 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:20:41.303974  393053 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:20:41.319974  393053 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311 for IP: 192.168.49.2
	I1212 20:20:41.319987  393053 certs.go:195] generating shared ca certs ...
	I1212 20:20:41.320002  393053 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:20:41.320164  393053 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:20:41.320247  393053 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:20:41.320255  393053 certs.go:257] generating profile certs ...
	I1212 20:20:41.320312  393053 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key
	I1212 20:20:41.320322  393053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt with IP's: []
	I1212 20:20:41.918934  393053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt ...
	I1212 20:20:41.918953  393053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: {Name:mkc545eddd77b9201d4e209bea179b79f9ad880c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:20:41.919167  393053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key ...
	I1212 20:20:41.919174  393053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key: {Name:mk76ac2849ba498d385ffa5bd987fdd25dff8566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:20:41.919269  393053 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7
	I1212 20:20:41.919280  393053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt.8bc713d7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1212 20:20:42.151095  393053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt.8bc713d7 ...
	I1212 20:20:42.151112  393053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt.8bc713d7: {Name:mk1efc3695370d0be77c3b792803086dddfda89d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:20:42.151318  393053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7 ...
	I1212 20:20:42.151333  393053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7: {Name:mk7587363ab4be7b11749d88e97ba227526e74df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:20:42.151423  393053 certs.go:382] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt.8bc713d7 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt
	I1212 20:20:42.151508  393053 certs.go:386] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key
	I1212 20:20:42.151569  393053 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key
	I1212 20:20:42.151582  393053 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt with IP's: []
	I1212 20:20:42.243164  393053 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt ...
	I1212 20:20:42.243182  393053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt: {Name:mk78f535238fac9b2ecd0ba605dd7603d346308c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:20:42.243364  393053 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key ...
	I1212 20:20:42.243377  393053 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key: {Name:mk78762762205b9e9d9b1a31b961093b10105fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
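(Annotation: certs.go above generates profile certificates signed by the shared minikubeCA, including an apiserver cert with the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.49.2. A minimal crypto/x509 sketch of issuing such a CA-signed cert; the inline self-signed CA and all names here are stand-ins, not minikube's crypto.go.)

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Hypothetical stand-in for the shared "minikubeCA" key pair from the log.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)

    	// Leaf certificate with the IP SANs seen in the apiserver profile cert.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, ca, &leafKey.PublicKey, caKey)
    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
    }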
	I1212 20:20:42.243562  393053 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:20:42.243605  393053 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:20:42.243616  393053 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:20:42.243644  393053 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:20:42.243669  393053 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:20:42.243692  393053 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:20:42.243736  393053 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:20:42.244347  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:20:42.266346  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:20:42.287775  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:20:42.307557  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:20:42.326598  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:20:42.345957  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:20:42.364066  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:20:42.381942  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:20:42.400119  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:20:42.418224  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:20:42.435695  393053 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:20:42.453407  393053 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:20:42.466422  393053 ssh_runner.go:195] Run: openssl version
	I1212 20:20:42.472718  393053 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:20:42.480158  393053 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:20:42.487659  393053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:20:42.491545  393053 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:20:42.491609  393053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:20:42.533904  393053 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:20:42.542858  393053 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/364853.pem /etc/ssl/certs/51391683.0
	I1212 20:20:42.552313  393053 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:20:42.560062  393053 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:20:42.568008  393053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:20:42.572025  393053 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:20:42.572083  393053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:20:42.613485  393053 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:20:42.621094  393053 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3648532.pem /etc/ssl/certs/3ec20f2e.0
	I1212 20:20:42.628688  393053 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:20:42.636398  393053 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:20:42.644193  393053 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:20:42.648141  393053 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:20:42.648206  393053 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:20:42.689562  393053 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:20:42.697253  393053 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
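(Annotation: the repeated pattern above is `openssl x509 -hash -noout -in <cert>` followed by `ln -fs <cert> /etc/ssl/certs/<hash>.0`, which is how the CA certificates are made discoverable by subject hash. A minimal Go sketch of that pattern, assuming openssl is on PATH; paths are illustrative since the log runs these steps over SSH inside the node.)

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash computes the OpenSSL subject hash of a certificate
    // and points <hash>.0 in the certs directory at it, mirroring the
    // openssl + ln -fs steps shown in the log above.
    func linkBySubjectHash(certPath, certsDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certsDir, hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace an existing link if present
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Println(err)
    	}
    }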
	I1212 20:20:42.704648  393053 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:20:42.708161  393053 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 20:20:42.708221  393053 kubeadm.go:401] StartCluster: {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:20:42.708291  393053 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:20:42.708357  393053 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:20:42.736829  393053 cri.go:89] found id: ""
	I1212 20:20:42.736894  393053 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:20:42.744808  393053 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:20:42.752654  393053 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:20:42.752710  393053 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:20:42.760423  393053 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:20:42.760433  393053 kubeadm.go:158] found existing configuration files:
	
	I1212 20:20:42.760499  393053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:20:42.768182  393053 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:20:42.768236  393053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:20:42.775455  393053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:20:42.783127  393053 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:20:42.783206  393053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:20:42.790632  393053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:20:42.798338  393053 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:20:42.798393  393053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:20:42.805750  393053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:20:42.813510  393053 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:20:42.813564  393053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:20:42.821419  393053 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:20:42.861279  393053 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:20:42.861330  393053 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:20:42.930223  393053 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:20:42.930287  393053 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:20:42.930322  393053 kubeadm.go:319] OS: Linux
	I1212 20:20:42.930366  393053 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:20:42.930413  393053 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:20:42.930459  393053 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:20:42.930506  393053 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:20:42.930552  393053 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:20:42.930599  393053 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:20:42.930652  393053 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:20:42.930701  393053 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:20:42.930746  393053 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:20:42.995935  393053 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:20:42.996038  393053 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:20:42.996128  393053 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:20:43.012775  393053 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:20:43.018279  393053 out.go:252]   - Generating certificates and keys ...
	I1212 20:20:43.018364  393053 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:20:43.018433  393053 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:20:43.172668  393053 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1212 20:20:43.247962  393053 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1212 20:20:43.544684  393053 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1212 20:20:43.903029  393053 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1212 20:20:44.351957  393053 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1212 20:20:44.352286  393053 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-261311 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 20:20:44.716799  393053 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1212 20:20:44.717084  393053 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-261311 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1212 20:20:44.881738  393053 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1212 20:20:45.230891  393053 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1212 20:20:45.993994  393053 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1212 20:20:45.994453  393053 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:20:46.150514  393053 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:20:46.406687  393053 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:20:46.541451  393053 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:20:46.700119  393053 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:20:46.980752  393053 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:20:46.981483  393053 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:20:46.984256  393053 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:20:46.987947  393053 out.go:252]   - Booting up control plane ...
	I1212 20:20:46.988053  393053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:20:46.988162  393053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:20:46.988263  393053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:20:47.006429  393053 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:20:47.006534  393053 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:20:47.015251  393053 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:20:47.015548  393053 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:20:47.015605  393053 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:20:47.143508  393053 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:20:47.143627  393053 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:24:47.142570  393053 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000216488s
	I1212 20:24:47.142594  393053 kubeadm.go:319] 
	I1212 20:24:47.142696  393053 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:24:47.142770  393053 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:24:47.143023  393053 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:24:47.143032  393053 kubeadm.go:319] 
	I1212 20:24:47.143399  393053 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:24:47.143456  393053 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:24:47.143510  393053 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:24:47.143514  393053 kubeadm.go:319] 
	I1212 20:24:47.148519  393053 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:24:47.148962  393053 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:24:47.149158  393053 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:24:47.149454  393053 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 20:24:47.149474  393053 kubeadm.go:319] 
	I1212 20:24:47.149621  393053 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 20:24:47.149682  393053 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-261311 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-261311 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000216488s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 20:24:47.149798  393053 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 20:24:47.560831  393053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:24:47.573540  393053 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:24:47.573591  393053 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:24:47.581747  393053 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:24:47.581757  393053 kubeadm.go:158] found existing configuration files:
	
	I1212 20:24:47.581806  393053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:24:47.589662  393053 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:24:47.589718  393053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:24:47.597127  393053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:24:47.604946  393053 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:24:47.605000  393053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:24:47.612602  393053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:24:47.620627  393053 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:24:47.620681  393053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:24:47.628442  393053 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:24:47.636106  393053 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:24:47.636161  393053 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:24:47.643985  393053 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:24:47.681645  393053 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:24:47.681698  393053 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:24:47.766432  393053 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:24:47.766500  393053 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:24:47.766534  393053 kubeadm.go:319] OS: Linux
	I1212 20:24:47.766577  393053 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:24:47.766624  393053 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:24:47.766671  393053 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:24:47.766717  393053 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:24:47.766764  393053 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:24:47.766810  393053 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:24:47.766854  393053 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:24:47.766901  393053 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:24:47.766946  393053 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:24:47.844231  393053 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:24:47.844335  393053 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:24:47.844454  393053 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:24:47.852939  393053 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:24:47.858401  393053 out.go:252]   - Generating certificates and keys ...
	I1212 20:24:47.858509  393053 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:24:47.858600  393053 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:24:47.858695  393053 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:24:47.858788  393053 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:24:47.858877  393053 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:24:47.858962  393053 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:24:47.859045  393053 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:24:47.859145  393053 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:24:47.859238  393053 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:24:47.859327  393053 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:24:47.859373  393053 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:24:47.859446  393053 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:24:48.049401  393053 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:24:48.245959  393053 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:24:48.474944  393053 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:24:48.924041  393053 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:24:49.130348  393053 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:24:49.131027  393053 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:24:49.135532  393053 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:24:49.139215  393053 out.go:252]   - Booting up control plane ...
	I1212 20:24:49.139309  393053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:24:49.139382  393053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:24:49.139444  393053 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:24:49.153558  393053 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:24:49.153691  393053 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:24:49.161774  393053 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:24:49.162434  393053 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:24:49.162632  393053 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:24:49.292859  393053 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:24:49.292966  393053 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:28:49.289393  393053 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001095694s
	I1212 20:28:49.289751  393053 kubeadm.go:319] 
	I1212 20:28:49.289864  393053 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:28:49.289920  393053 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:28:49.290287  393053 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:28:49.290296  393053 kubeadm.go:319] 
	I1212 20:28:49.290486  393053 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:28:49.290642  393053 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:28:49.290696  393053 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:28:49.290700  393053 kubeadm.go:319] 
	I1212 20:28:49.295625  393053 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:28:49.296077  393053 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:28:49.296191  393053 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:28:49.296460  393053 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:28:49.296465  393053 kubeadm.go:319] 
	I1212 20:28:49.296537  393053 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:28:49.296593  393053 kubeadm.go:403] duration metric: took 8m6.588378856s to StartCluster
	I1212 20:28:49.296633  393053 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:28:49.296699  393053 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:28:49.322359  393053 cri.go:89] found id: ""
	I1212 20:28:49.322384  393053 logs.go:282] 0 containers: []
	W1212 20:28:49.322390  393053 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:28:49.322396  393053 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:28:49.322456  393053 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:28:49.349499  393053 cri.go:89] found id: ""
	I1212 20:28:49.349513  393053 logs.go:282] 0 containers: []
	W1212 20:28:49.349520  393053 logs.go:284] No container was found matching "etcd"
	I1212 20:28:49.349525  393053 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:28:49.349582  393053 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:28:49.378695  393053 cri.go:89] found id: ""
	I1212 20:28:49.378709  393053 logs.go:282] 0 containers: []
	W1212 20:28:49.378715  393053 logs.go:284] No container was found matching "coredns"
	I1212 20:28:49.378720  393053 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:28:49.378776  393053 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:28:49.407090  393053 cri.go:89] found id: ""
	I1212 20:28:49.407104  393053 logs.go:282] 0 containers: []
	W1212 20:28:49.407111  393053 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:28:49.407116  393053 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:28:49.407194  393053 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:28:49.433948  393053 cri.go:89] found id: ""
	I1212 20:28:49.433962  393053 logs.go:282] 0 containers: []
	W1212 20:28:49.433969  393053 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:28:49.433974  393053 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:28:49.434030  393053 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:28:49.457776  393053 cri.go:89] found id: ""
	I1212 20:28:49.457790  393053 logs.go:282] 0 containers: []
	W1212 20:28:49.457797  393053 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:28:49.457802  393053 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:28:49.457863  393053 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:28:49.482608  393053 cri.go:89] found id: ""
	I1212 20:28:49.482629  393053 logs.go:282] 0 containers: []
	W1212 20:28:49.482636  393053 logs.go:284] No container was found matching "kindnet"
	I1212 20:28:49.482644  393053 logs.go:123] Gathering logs for container status ...
	I1212 20:28:49.482653  393053 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:28:49.512222  393053 logs.go:123] Gathering logs for kubelet ...
	I1212 20:28:49.512238  393053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:28:49.577382  393053 logs.go:123] Gathering logs for dmesg ...
	I1212 20:28:49.577401  393053 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:28:49.592168  393053 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:28:49.592185  393053 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:28:49.657764  393053 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:28:49.649228    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:49.649755    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:49.651406    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:49.651813    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:49.653354    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:28:49.649228    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:49.649755    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:49.651406    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:49.651813    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:49.653354    4858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:28:49.657775  393053 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:28:49.657786  393053 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1212 20:28:49.689860  393053 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095694s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:28:49.689909  393053 out.go:285] * 
	W1212 20:28:49.689985  393053 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095694s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:28:49.690000  393053 out.go:285] * 
	W1212 20:28:49.692175  393053 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:28:49.697986  393053 out.go:203] 
	W1212 20:28:49.701642  393053 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001095694s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:28:49.701690  393053 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:28:49.701710  393053 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:28:49.705297  393053 out.go:203] 
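
For reference, a minimal sketch of the remediation the Suggestion line above points at, reusing the profile name, driver, and container runtime recorded in the StartCluster config dump. This is only a starting point for reproducing the failure locally, not a verified fix for this run:

	# hypothetical re-run of the failing profile with the kubelet cgroup driver
	# forced to systemd, as suggested by the minikube output above
	minikube start -p functional-261311 --driver=docker --container-runtime=crio \
	  --extra-config=kubelet.cgroup-driver=systemd

minikube forwards --extra-config=kubelet.* values into the kubelet it configures on the node, which is why the Suggestion line expresses the cgroup-driver change in that form.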
	
	
	==> CRI-O <==
	Dec 12 20:20:40 functional-261311 crio[839]: time="2025-12-12T20:20:40.833393981Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 20:20:40 functional-261311 crio[839]: time="2025-12-12T20:20:40.833435844Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 20:20:40 functional-261311 crio[839]: time="2025-12-12T20:20:40.833489965Z" level=info msg="Create NRI interface"
	Dec 12 20:20:40 functional-261311 crio[839]: time="2025-12-12T20:20:40.833594992Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 20:20:40 functional-261311 crio[839]: time="2025-12-12T20:20:40.833603968Z" level=info msg="runtime interface created"
	Dec 12 20:20:40 functional-261311 crio[839]: time="2025-12-12T20:20:40.833614709Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 20:20:40 functional-261311 crio[839]: time="2025-12-12T20:20:40.83362092Z" level=info msg="runtime interface starting up..."
	Dec 12 20:20:40 functional-261311 crio[839]: time="2025-12-12T20:20:40.833627534Z" level=info msg="starting plugins..."
	Dec 12 20:20:40 functional-261311 crio[839]: time="2025-12-12T20:20:40.833640276Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 20:20:40 functional-261311 crio[839]: time="2025-12-12T20:20:40.833710792Z" level=info msg="No systemd watchdog enabled"
	Dec 12 20:20:40 functional-261311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 20:20:43 functional-261311 crio[839]: time="2025-12-12T20:20:42.999678657Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=8fbca7d3-b02b-47bf-83b6-28d4d89ad3d2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:20:43 functional-261311 crio[839]: time="2025-12-12T20:20:43.002850711Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=b97dff3c-b5d9-4de8-a920-125f4dc5e0cf name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:20:43 functional-261311 crio[839]: time="2025-12-12T20:20:43.003815898Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=b5c6e514-65bc-4d50-aac3-e72b129b0df0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:20:43 functional-261311 crio[839]: time="2025-12-12T20:20:43.005537139Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=abdcc215-e166-4554-bff0-6c829a815bc2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:20:43 functional-261311 crio[839]: time="2025-12-12T20:20:43.006200071Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=49f1c7b1-2d32-49c3-afbf-ec79bd5b8093 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:20:43 functional-261311 crio[839]: time="2025-12-12T20:20:43.006924321Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=9a930572-79ab-4c1c-86e1-7cb4fb38300d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:20:43 functional-261311 crio[839]: time="2025-12-12T20:20:43.007752374Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=956751bd-66b3-48f8-b4f6-2b82f19b88af name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:24:47 functional-261311 crio[839]: time="2025-12-12T20:24:47.847955773Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=71a944c8-be54-4b86-9218-9dd6b2f19c92 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:24:47 functional-261311 crio[839]: time="2025-12-12T20:24:47.848673533Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=13e390c4-9dab-4d8f-b4d4-f605396b59b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:24:47 functional-261311 crio[839]: time="2025-12-12T20:24:47.849163737Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=98d8e915-5718-4d8f-8c1f-8777395202d1 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:24:47 functional-261311 crio[839]: time="2025-12-12T20:24:47.849628275Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=acfc1b88-c596-4d14-a6be-a8f16f4972af name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:24:47 functional-261311 crio[839]: time="2025-12-12T20:24:47.850024152Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=bad4497a-7209-4ce9-af5e-318fc07ff51f name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:24:47 functional-261311 crio[839]: time="2025-12-12T20:24:47.850477941Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=57ef2eef-6458-4e4a-aa03-65e4b3fc4cbe name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:24:47 functional-261311 crio[839]: time="2025-12-12T20:24:47.850917149Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=bd737ea2-8b78-415c-bdbc-1994fbb427c9 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:28:50.668699    4963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:50.669557    4963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:50.671058    4963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:50.671514    4963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:28:50.672987    4963 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:28:50 up  3:11,  0 user,  load average: 0.12, 0.56, 1.27
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:28:48 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:28:48 functional-261311 kubelet[4772]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:28:48 functional-261311 kubelet[4772]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:28:48 functional-261311 kubelet[4772]: E1212 20:28:48.510620    4772 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:28:48 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:28:48 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:28:49 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 646.
	Dec 12 20:28:49 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:28:49 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:28:49 functional-261311 kubelet[4777]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:28:49 functional-261311 kubelet[4777]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:28:49 functional-261311 kubelet[4777]: E1212 20:28:49.250483    4777 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:28:49 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:28:49 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:28:49 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 647.
	Dec 12 20:28:49 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:28:49 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:28:50 functional-261311 kubelet[4871]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:28:50 functional-261311 kubelet[4871]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:28:50 functional-261311 kubelet[4871]: E1212 20:28:50.017278    4871 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:28:50 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:28:50 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:28:50 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 648.
	Dec 12 20:28:50 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:28:50 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
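The kubelet entries above point at the root cause of this startup failure: each restart attempt (restart counters 646 through 648) exits during configuration validation with "kubelet is configured to not run on a host using cgroup v1", so the API server on port 8441 never comes up and the later kubectl and status probes are refused. As a hedged sketch that is not part of the recorded run, one way to confirm which cgroup hierarchy the test host exposes (the paths below are standard kernel interfaces, not taken from these logs):

	# "cgroup2fs" indicates the unified (v2) hierarchy; "tmpfs" indicates cgroup v1.
	stat -fc %T /sys/fs/cgroup
	# cgroup.controllers exists at the mount root only on cgroup v2 hosts.
	ls /sys/fs/cgroup/cgroup.controllers 2>/dev/null || echo "cgroup v1 host"

On a cgroup v1 host (which is what the kubelet error itself reports, consistent with the cgroupfs driver shown in the docker info output below), the kubelet bundled with v1.35.0-beta.0 refuses to start, which matches the repeated kubelet.service restart loop above.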
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 6 (341.905629ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 20:28:51.136471  398835 status.go:458] kubeconfig endpoint: get endpoint: "functional-261311" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (502.31s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.24s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1212 20:28:51.153874  364853 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-261311 --alsologtostderr -v=8
E1212 20:29:36.832041  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:30:04.539471  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:32:44.061658  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:34:07.138604  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:34:36.832521  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-261311 --alsologtostderr -v=8: exit status 80 (6m5.823016425s)

                                                
                                                
-- stdout --
	* [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:28:51.200639  398903 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:28:51.200813  398903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:28:51.200825  398903 out.go:374] Setting ErrFile to fd 2...
	I1212 20:28:51.200844  398903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:28:51.201121  398903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:28:51.201526  398903 out.go:368] Setting JSON to false
	I1212 20:28:51.202423  398903 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11484,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:28:51.202499  398903 start.go:143] virtualization:  
	I1212 20:28:51.205894  398903 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:28:51.209621  398903 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:28:51.209743  398903 notify.go:221] Checking for updates...
	I1212 20:28:51.215382  398903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:28:51.218267  398903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:51.221168  398903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:28:51.224043  398903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:28:51.227018  398903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:28:51.230467  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:51.230581  398903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:28:51.269738  398903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:28:51.269857  398903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:28:51.341809  398903 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:28:51.330621143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:28:51.341929  398903 docker.go:319] overlay module found
	I1212 20:28:51.347026  398903 out.go:179] * Using the docker driver based on existing profile
	I1212 20:28:51.349898  398903 start.go:309] selected driver: docker
	I1212 20:28:51.349928  398903 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:51.350015  398903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:28:51.350136  398903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:28:51.408041  398903 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:28:51.398420734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:28:51.408534  398903 cni.go:84] Creating CNI manager for ""
	I1212 20:28:51.408600  398903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:28:51.408656  398903 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:51.413511  398903 out.go:179] * Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	I1212 20:28:51.416491  398903 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:28:51.419403  398903 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:28:51.422306  398903 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:28:51.422357  398903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:28:51.422368  398903 cache.go:65] Caching tarball of preloaded images
	I1212 20:28:51.422458  398903 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:28:51.422471  398903 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:28:51.422591  398903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:28:51.422818  398903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:28:51.441630  398903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:28:51.441653  398903 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:28:51.441676  398903 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:28:51.441708  398903 start.go:360] acquireMachinesLock for functional-261311: {Name:mkbc4e6c743e47953e99b8ce65e244d33b483105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:28:51.441778  398903 start.go:364] duration metric: took 45.9µs to acquireMachinesLock for "functional-261311"
	I1212 20:28:51.441803  398903 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:28:51.441812  398903 fix.go:54] fixHost starting: 
	I1212 20:28:51.442073  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:51.469956  398903 fix.go:112] recreateIfNeeded on functional-261311: state=Running err=<nil>
	W1212 20:28:51.469989  398903 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:28:51.473238  398903 out.go:252] * Updating the running docker "functional-261311" container ...
	I1212 20:28:51.473304  398903 machine.go:94] provisionDockerMachine start ...
	I1212 20:28:51.473396  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.494630  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.494961  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.494976  398903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:28:51.648147  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:28:51.648174  398903 ubuntu.go:182] provisioning hostname "functional-261311"
	I1212 20:28:51.648237  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.668778  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.669090  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.669106  398903 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-261311 && echo "functional-261311" | sudo tee /etc/hostname
	I1212 20:28:51.829776  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:28:51.829853  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.848648  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.848971  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.848987  398903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-261311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-261311/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-261311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:28:52.002627  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:28:52.002659  398903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:28:52.002689  398903 ubuntu.go:190] setting up certificates
	I1212 20:28:52.002713  398903 provision.go:84] configureAuth start
	I1212 20:28:52.002795  398903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:28:52.023958  398903 provision.go:143] copyHostCerts
	I1212 20:28:52.024006  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:28:52.024050  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:28:52.024064  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:28:52.024145  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:28:52.024243  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:28:52.024271  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:28:52.024280  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:28:52.024310  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:28:52.024357  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:28:52.024421  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:28:52.024431  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:28:52.024463  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:28:52.024521  398903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.functional-261311 san=[127.0.0.1 192.168.49.2 functional-261311 localhost minikube]
	I1212 20:28:52.567706  398903 provision.go:177] copyRemoteCerts
	I1212 20:28:52.567776  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:28:52.567821  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:52.585858  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:52.692768  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:28:52.692828  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:28:52.711466  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:28:52.711534  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:28:52.730742  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:28:52.730815  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:28:52.749109  398903 provision.go:87] duration metric: took 746.363484ms to configureAuth
	I1212 20:28:52.749138  398903 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:28:52.749373  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:52.749480  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:52.767233  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:52.767548  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:52.767570  398903 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:28:53.124031  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:28:53.124063  398903 machine.go:97] duration metric: took 1.650735569s to provisionDockerMachine
	I1212 20:28:53.124076  398903 start.go:293] postStartSetup for "functional-261311" (driver="docker")
	I1212 20:28:53.124090  398903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:28:53.124184  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:28:53.124249  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.144150  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.248393  398903 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:28:53.251578  398903 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 20:28:53.251600  398903 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 20:28:53.251605  398903 command_runner.go:130] > VERSION_ID="12"
	I1212 20:28:53.251610  398903 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 20:28:53.251614  398903 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 20:28:53.251618  398903 command_runner.go:130] > ID=debian
	I1212 20:28:53.251623  398903 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 20:28:53.251629  398903 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 20:28:53.251634  398903 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 20:28:53.251713  398903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:28:53.251736  398903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:28:53.251748  398903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:28:53.251809  398903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:28:53.251889  398903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:28:53.251900  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:28:53.251976  398903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> hosts in /etc/test/nested/copy/364853
	I1212 20:28:53.251984  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> /etc/test/nested/copy/364853/hosts
	I1212 20:28:53.252026  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364853
	I1212 20:28:53.259320  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:28:53.277130  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts --> /etc/test/nested/copy/364853/hosts (40 bytes)
	I1212 20:28:53.294238  398903 start.go:296] duration metric: took 170.145848ms for postStartSetup
	I1212 20:28:53.294390  398903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:28:53.294470  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.312603  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.412930  398903 command_runner.go:130] > 11%
	I1212 20:28:53.413464  398903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:28:53.417828  398903 command_runner.go:130] > 174G
	I1212 20:28:53.418334  398903 fix.go:56] duration metric: took 1.976518079s for fixHost
	I1212 20:28:53.418383  398903 start.go:83] releasing machines lock for "functional-261311", held for 1.976583573s
	I1212 20:28:53.418465  398903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:28:53.435134  398903 ssh_runner.go:195] Run: cat /version.json
	I1212 20:28:53.435190  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.435445  398903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:28:53.435511  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.452987  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.462005  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.555880  398903 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 20:28:53.556060  398903 ssh_runner.go:195] Run: systemctl --version
	I1212 20:28:53.643428  398903 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 20:28:53.646219  398903 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 20:28:53.646272  398903 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 20:28:53.646362  398903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:28:53.685489  398903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 20:28:53.690919  398903 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 20:28:53.690960  398903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:28:53.691016  398903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:28:53.699790  398903 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:28:53.699851  398903 start.go:496] detecting cgroup driver to use...
	I1212 20:28:53.699883  398903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:28:53.699937  398903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:28:53.716256  398903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:28:53.731380  398903 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:28:53.731442  398903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:28:53.747947  398903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:28:53.763704  398903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:28:53.877723  398903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:28:53.997385  398903 docker.go:234] disabling docker service ...
	I1212 20:28:53.997457  398903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:28:54.016313  398903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:28:54.032112  398903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:28:54.157667  398903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:28:54.273189  398903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:28:54.288211  398903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:28:54.301284  398903 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 20:28:54.302509  398903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:28:54.302613  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.311343  398903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:28:54.311460  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.320776  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.330058  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.340191  398903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:28:54.348326  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.357164  398903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.365464  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.374528  398903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:28:54.381778  398903 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 20:28:54.382795  398903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:28:54.390360  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:54.529224  398903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:28:54.703666  398903 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:28:54.703740  398903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:28:54.707780  398903 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 20:28:54.707808  398903 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 20:28:54.707826  398903 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1212 20:28:54.707834  398903 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:28:54.707840  398903 command_runner.go:130] > Access: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707850  398903 command_runner.go:130] > Modify: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707858  398903 command_runner.go:130] > Change: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707861  398903 command_runner.go:130] >  Birth: -
	I1212 20:28:54.707934  398903 start.go:564] Will wait 60s for crictl version
	I1212 20:28:54.708017  398903 ssh_runner.go:195] Run: which crictl
	I1212 20:28:54.711729  398903 command_runner.go:130] > /usr/local/bin/crictl
	I1212 20:28:54.711909  398903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:28:54.737852  398903 command_runner.go:130] > Version:  0.1.0
	I1212 20:28:54.737888  398903 command_runner.go:130] > RuntimeName:  cri-o
	I1212 20:28:54.737895  398903 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1212 20:28:54.737901  398903 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 20:28:54.740042  398903 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:28:54.740184  398903 ssh_runner.go:195] Run: crio --version
	I1212 20:28:54.769676  398903 command_runner.go:130] > crio version 1.34.3
	I1212 20:28:54.769713  398903 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 20:28:54.769720  398903 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 20:28:54.769725  398903 command_runner.go:130] >    GitTreeState:   dirty
	I1212 20:28:54.769750  398903 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 20:28:54.769764  398903 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 20:28:54.769768  398903 command_runner.go:130] >    Compiler:       gc
	I1212 20:28:54.769788  398903 command_runner.go:130] >    Platform:       linux/arm64
	I1212 20:28:54.769802  398903 command_runner.go:130] >    Linkmode:       static
	I1212 20:28:54.769806  398903 command_runner.go:130] >    BuildTags:
	I1212 20:28:54.769810  398903 command_runner.go:130] >      static
	I1212 20:28:54.769813  398903 command_runner.go:130] >      netgo
	I1212 20:28:54.769832  398903 command_runner.go:130] >      osusergo
	I1212 20:28:54.769838  398903 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 20:28:54.769842  398903 command_runner.go:130] >      seccomp
	I1212 20:28:54.769849  398903 command_runner.go:130] >      apparmor
	I1212 20:28:54.769852  398903 command_runner.go:130] >      selinux
	I1212 20:28:54.769859  398903 command_runner.go:130] >    LDFlags:          unknown
	I1212 20:28:54.769867  398903 command_runner.go:130] >    SeccompEnabled:   true
	I1212 20:28:54.769872  398903 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 20:28:54.769969  398903 ssh_runner.go:195] Run: crio --version
	I1212 20:28:54.796781  398903 command_runner.go:130] > crio version 1.34.3
	I1212 20:28:54.796850  398903 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 20:28:54.796873  398903 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 20:28:54.796896  398903 command_runner.go:130] >    GitTreeState:   dirty
	I1212 20:28:54.796933  398903 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 20:28:54.796961  398903 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 20:28:54.796982  398903 command_runner.go:130] >    Compiler:       gc
	I1212 20:28:54.797005  398903 command_runner.go:130] >    Platform:       linux/arm64
	I1212 20:28:54.797036  398903 command_runner.go:130] >    Linkmode:       static
	I1212 20:28:54.797055  398903 command_runner.go:130] >    BuildTags:
	I1212 20:28:54.797071  398903 command_runner.go:130] >      static
	I1212 20:28:54.797089  398903 command_runner.go:130] >      netgo
	I1212 20:28:54.797108  398903 command_runner.go:130] >      osusergo
	I1212 20:28:54.797151  398903 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 20:28:54.797177  398903 command_runner.go:130] >      seccomp
	I1212 20:28:54.797197  398903 command_runner.go:130] >      apparmor
	I1212 20:28:54.797231  398903 command_runner.go:130] >      selinux
	I1212 20:28:54.797262  398903 command_runner.go:130] >    LDFlags:          unknown
	I1212 20:28:54.797290  398903 command_runner.go:130] >    SeccompEnabled:   true
	I1212 20:28:54.797309  398903 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 20:28:54.804038  398903 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:28:54.806949  398903 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:28:54.823441  398903 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:28:54.827623  398903 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 20:28:54.827865  398903 kubeadm.go:884] updating cluster {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:28:54.827977  398903 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:28:54.828031  398903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:28:54.860175  398903 command_runner.go:130] > {
	I1212 20:28:54.860197  398903 command_runner.go:130] >   "images":  [
	I1212 20:28:54.860201  398903 command_runner.go:130] >     {
	I1212 20:28:54.860214  398903 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 20:28:54.860219  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860225  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 20:28:54.860229  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860233  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860242  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 20:28:54.860250  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 20:28:54.860254  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860258  398903 command_runner.go:130] >       "size":  "111333938",
	I1212 20:28:54.860263  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860270  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860274  398903 command_runner.go:130] >     },
	I1212 20:28:54.860277  398903 command_runner.go:130] >     {
	I1212 20:28:54.860285  398903 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 20:28:54.860289  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860295  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:28:54.860298  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860302  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860310  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 20:28:54.860333  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 20:28:54.860341  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860346  398903 command_runner.go:130] >       "size":  "29037500",
	I1212 20:28:54.860350  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860357  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860360  398903 command_runner.go:130] >     },
	I1212 20:28:54.860363  398903 command_runner.go:130] >     {
	I1212 20:28:54.860391  398903 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 20:28:54.860396  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860401  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 20:28:54.860404  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860408  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860417  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 20:28:54.860425  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 20:28:54.860428  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860434  398903 command_runner.go:130] >       "size":  "74491780",
	I1212 20:28:54.860439  398903 command_runner.go:130] >       "username":  "nonroot",
	I1212 20:28:54.860443  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860447  398903 command_runner.go:130] >     },
	I1212 20:28:54.860456  398903 command_runner.go:130] >     {
	I1212 20:28:54.860463  398903 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 20:28:54.860467  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860472  398903 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 20:28:54.860478  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860482  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860490  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 20:28:54.860497  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 20:28:54.860505  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860510  398903 command_runner.go:130] >       "size":  "60857170",
	I1212 20:28:54.860513  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860517  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860521  398903 command_runner.go:130] >       },
	I1212 20:28:54.860530  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860534  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860540  398903 command_runner.go:130] >     },
	I1212 20:28:54.860546  398903 command_runner.go:130] >     {
	I1212 20:28:54.860552  398903 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 20:28:54.860558  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860564  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 20:28:54.860567  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860577  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860594  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 20:28:54.860603  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 20:28:54.860610  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860614  398903 command_runner.go:130] >       "size":  "84949999",
	I1212 20:28:54.860618  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860622  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860625  398903 command_runner.go:130] >       },
	I1212 20:28:54.860630  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860636  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860639  398903 command_runner.go:130] >     },
	I1212 20:28:54.860643  398903 command_runner.go:130] >     {
	I1212 20:28:54.860652  398903 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 20:28:54.860659  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860665  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 20:28:54.860668  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860672  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860684  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 20:28:54.860695  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 20:28:54.860698  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860702  398903 command_runner.go:130] >       "size":  "72170325",
	I1212 20:28:54.860706  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860711  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860717  398903 command_runner.go:130] >       },
	I1212 20:28:54.860721  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860726  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860739  398903 command_runner.go:130] >     },
	I1212 20:28:54.860747  398903 command_runner.go:130] >     {
	I1212 20:28:54.860754  398903 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 20:28:54.860760  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860766  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 20:28:54.860769  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860773  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860781  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 20:28:54.860792  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 20:28:54.860796  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860801  398903 command_runner.go:130] >       "size":  "74106775",
	I1212 20:28:54.860807  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860811  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860817  398903 command_runner.go:130] >     },
	I1212 20:28:54.860820  398903 command_runner.go:130] >     {
	I1212 20:28:54.860827  398903 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 20:28:54.860831  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860839  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 20:28:54.860844  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860854  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860863  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 20:28:54.860876  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 20:28:54.860883  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860887  398903 command_runner.go:130] >       "size":  "49822549",
	I1212 20:28:54.860891  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860895  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860905  398903 command_runner.go:130] >       },
	I1212 20:28:54.860908  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860912  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860922  398903 command_runner.go:130] >     },
	I1212 20:28:54.860925  398903 command_runner.go:130] >     {
	I1212 20:28:54.860932  398903 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 20:28:54.860938  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860944  398903 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.860948  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860953  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860961  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 20:28:54.860971  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 20:28:54.860975  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860979  398903 command_runner.go:130] >       "size":  "519884",
	I1212 20:28:54.860984  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860991  398903 command_runner.go:130] >         "value":  "65535"
	I1212 20:28:54.860994  398903 command_runner.go:130] >       },
	I1212 20:28:54.861000  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.861004  398903 command_runner.go:130] >       "pinned":  true
	I1212 20:28:54.861014  398903 command_runner.go:130] >     }
	I1212 20:28:54.861017  398903 command_runner.go:130] >   ]
	I1212 20:28:54.861020  398903 command_runner.go:130] > }
	I1212 20:28:54.861204  398903 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:28:54.861218  398903 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:28:54.861275  398903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:28:54.883482  398903 command_runner.go:130] > {
	I1212 20:28:54.883501  398903 command_runner.go:130] >   "images":  [
	I1212 20:28:54.883506  398903 command_runner.go:130] >     {
	I1212 20:28:54.883514  398903 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 20:28:54.883520  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883526  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 20:28:54.883529  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883533  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883547  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 20:28:54.883556  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 20:28:54.883560  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883564  398903 command_runner.go:130] >       "size":  "111333938",
	I1212 20:28:54.883568  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883574  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883577  398903 command_runner.go:130] >     },
	I1212 20:28:54.883580  398903 command_runner.go:130] >     {
	I1212 20:28:54.883587  398903 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 20:28:54.883591  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883597  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:28:54.883600  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883604  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883612  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 20:28:54.883620  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 20:28:54.883624  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883628  398903 command_runner.go:130] >       "size":  "29037500",
	I1212 20:28:54.883632  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883638  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883641  398903 command_runner.go:130] >     },
	I1212 20:28:54.883645  398903 command_runner.go:130] >     {
	I1212 20:28:54.883652  398903 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 20:28:54.883656  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883663  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 20:28:54.883666  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883670  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883679  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 20:28:54.883687  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 20:28:54.883690  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883695  398903 command_runner.go:130] >       "size":  "74491780",
	I1212 20:28:54.883699  398903 command_runner.go:130] >       "username":  "nonroot",
	I1212 20:28:54.883702  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883706  398903 command_runner.go:130] >     },
	I1212 20:28:54.883712  398903 command_runner.go:130] >     {
	I1212 20:28:54.883719  398903 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 20:28:54.883723  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883728  398903 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 20:28:54.883733  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883737  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883745  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 20:28:54.883752  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 20:28:54.883756  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883759  398903 command_runner.go:130] >       "size":  "60857170",
	I1212 20:28:54.883763  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883767  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883770  398903 command_runner.go:130] >       },
	I1212 20:28:54.883778  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883783  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883786  398903 command_runner.go:130] >     },
	I1212 20:28:54.883788  398903 command_runner.go:130] >     {
	I1212 20:28:54.883795  398903 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 20:28:54.883798  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883804  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 20:28:54.883807  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883811  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883819  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 20:28:54.883827  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 20:28:54.883830  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883834  398903 command_runner.go:130] >       "size":  "84949999",
	I1212 20:28:54.883838  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883842  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883845  398903 command_runner.go:130] >       },
	I1212 20:28:54.883854  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883858  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883861  398903 command_runner.go:130] >     },
	I1212 20:28:54.883864  398903 command_runner.go:130] >     {
	I1212 20:28:54.883874  398903 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 20:28:54.883878  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883884  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 20:28:54.883888  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883891  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883899  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 20:28:54.883908  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 20:28:54.883911  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883915  398903 command_runner.go:130] >       "size":  "72170325",
	I1212 20:28:54.883919  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883923  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883926  398903 command_runner.go:130] >       },
	I1212 20:28:54.883930  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883935  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883938  398903 command_runner.go:130] >     },
	I1212 20:28:54.883942  398903 command_runner.go:130] >     {
	I1212 20:28:54.883949  398903 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 20:28:54.883952  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883958  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 20:28:54.883961  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883965  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883973  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 20:28:54.883981  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 20:28:54.883983  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883988  398903 command_runner.go:130] >       "size":  "74106775",
	I1212 20:28:54.883991  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883995  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883999  398903 command_runner.go:130] >     },
	I1212 20:28:54.884002  398903 command_runner.go:130] >     {
	I1212 20:28:54.884008  398903 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 20:28:54.884012  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.884017  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 20:28:54.884020  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884030  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.884038  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 20:28:54.884055  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 20:28:54.884061  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884064  398903 command_runner.go:130] >       "size":  "49822549",
	I1212 20:28:54.884068  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.884072  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.884075  398903 command_runner.go:130] >       },
	I1212 20:28:54.884079  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.884082  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.884085  398903 command_runner.go:130] >     },
	I1212 20:28:54.884088  398903 command_runner.go:130] >     {
	I1212 20:28:54.884095  398903 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 20:28:54.884099  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.884103  398903 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.884106  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884110  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.884118  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 20:28:54.884125  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 20:28:54.884129  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884133  398903 command_runner.go:130] >       "size":  "519884",
	I1212 20:28:54.884137  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.884141  398903 command_runner.go:130] >         "value":  "65535"
	I1212 20:28:54.884145  398903 command_runner.go:130] >       },
	I1212 20:28:54.884149  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.884152  398903 command_runner.go:130] >       "pinned":  true
	I1212 20:28:54.884155  398903 command_runner.go:130] >     }
	I1212 20:28:54.884158  398903 command_runner.go:130] >   ]
	I1212 20:28:54.884161  398903 command_runner.go:130] > }
	I1212 20:28:54.885632  398903 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:28:54.885655  398903 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:28:54.885663  398903 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 20:28:54.885778  398903 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-261311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:28:54.885868  398903 ssh_runner.go:195] Run: crio config
	I1212 20:28:54.934221  398903 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 20:28:54.934247  398903 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 20:28:54.934255  398903 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 20:28:54.934259  398903 command_runner.go:130] > #
	I1212 20:28:54.934288  398903 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 20:28:54.934303  398903 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 20:28:54.934310  398903 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 20:28:54.934320  398903 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 20:28:54.934324  398903 command_runner.go:130] > # reload'.
	I1212 20:28:54.934331  398903 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 20:28:54.934341  398903 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 20:28:54.934347  398903 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 20:28:54.934369  398903 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 20:28:54.934379  398903 command_runner.go:130] > [crio]
	I1212 20:28:54.934386  398903 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 20:28:54.934403  398903 command_runner.go:130] > # containers images, in this directory.
	I1212 20:28:54.934708  398903 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 20:28:54.934725  398903 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 20:28:54.935118  398903 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1212 20:28:54.935167  398903 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1212 20:28:54.935270  398903 command_runner.go:130] > # imagestore = ""
	I1212 20:28:54.935280  398903 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 20:28:54.935288  398903 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 20:28:54.935534  398903 command_runner.go:130] > # storage_driver = "overlay"
	I1212 20:28:54.935547  398903 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 20:28:54.935554  398903 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 20:28:54.935682  398903 command_runner.go:130] > # storage_option = [
	I1212 20:28:54.935790  398903 command_runner.go:130] > # ]
	I1212 20:28:54.935801  398903 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 20:28:54.935808  398903 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 20:28:54.935977  398903 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 20:28:54.935987  398903 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 20:28:54.936004  398903 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 20:28:54.936009  398903 command_runner.go:130] > # always happen on a node reboot
	I1212 20:28:54.936228  398903 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 20:28:54.936250  398903 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 20:28:54.936257  398903 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 20:28:54.936263  398903 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 20:28:54.936389  398903 command_runner.go:130] > # version_file_persist = ""
	I1212 20:28:54.936402  398903 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 20:28:54.936411  398903 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 20:28:54.937698  398903 command_runner.go:130] > # internal_wipe = true
	I1212 20:28:54.937721  398903 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 20:28:54.937728  398903 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 20:28:54.937860  398903 command_runner.go:130] > # internal_repair = true
	I1212 20:28:54.937871  398903 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 20:28:54.937878  398903 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 20:28:54.937885  398903 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 20:28:54.938097  398903 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 20:28:54.938132  398903 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 20:28:54.938152  398903 command_runner.go:130] > [crio.api]
	I1212 20:28:54.938172  398903 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 20:28:54.938284  398903 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 20:28:54.938314  398903 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 20:28:54.938521  398903 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 20:28:54.938555  398903 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 20:28:54.938577  398903 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 20:28:54.938680  398903 command_runner.go:130] > # stream_port = "0"
	I1212 20:28:54.938717  398903 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 20:28:54.938951  398903 command_runner.go:130] > # stream_enable_tls = false
	I1212 20:28:54.938995  398903 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 20:28:54.939084  398903 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 20:28:54.939113  398903 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 20:28:54.939142  398903 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1212 20:28:54.939249  398903 command_runner.go:130] > # stream_tls_cert = ""
	I1212 20:28:54.939291  398903 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 20:28:54.939312  398903 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1212 20:28:54.939622  398903 command_runner.go:130] > # stream_tls_key = ""
	I1212 20:28:54.939657  398903 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 20:28:54.939704  398903 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 20:28:54.939736  398903 command_runner.go:130] > # automatically pick up the changes.
	I1212 20:28:54.939811  398903 command_runner.go:130] > # stream_tls_ca = ""
	I1212 20:28:54.939858  398903 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 20:28:54.940308  398903 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 20:28:54.940353  398903 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 20:28:54.940776  398903 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1212 20:28:54.940788  398903 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 20:28:54.940801  398903 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 20:28:54.940806  398903 command_runner.go:130] > [crio.runtime]
	I1212 20:28:54.940824  398903 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 20:28:54.940830  398903 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 20:28:54.940834  398903 command_runner.go:130] > # "nofile=1024:2048"
	I1212 20:28:54.940840  398903 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 20:28:54.940969  398903 command_runner.go:130] > # default_ulimits = [
	I1212 20:28:54.941191  398903 command_runner.go:130] > # ]
	I1212 20:28:54.941204  398903 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 20:28:54.941558  398903 command_runner.go:130] > # no_pivot = false
	I1212 20:28:54.941568  398903 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 20:28:54.941575  398903 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 20:28:54.941945  398903 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 20:28:54.941956  398903 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 20:28:54.941961  398903 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 20:28:54.942013  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:28:54.942279  398903 command_runner.go:130] > # conmon = ""
	I1212 20:28:54.942287  398903 command_runner.go:130] > # Cgroup setting for conmon
	I1212 20:28:54.942295  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 20:28:54.942500  398903 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 20:28:54.942511  398903 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 20:28:54.942545  398903 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 20:28:54.942582  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:28:54.942706  398903 command_runner.go:130] > # conmon_env = [
	I1212 20:28:54.942961  398903 command_runner.go:130] > # ]
	I1212 20:28:54.943022  398903 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 20:28:54.943043  398903 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 20:28:54.943084  398903 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 20:28:54.943203  398903 command_runner.go:130] > # default_env = [
	I1212 20:28:54.943456  398903 command_runner.go:130] > # ]
	I1212 20:28:54.943514  398903 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 20:28:54.943537  398903 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1212 20:28:54.943931  398903 command_runner.go:130] > # selinux = false
	I1212 20:28:54.943943  398903 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 20:28:54.943997  398903 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1212 20:28:54.944007  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944219  398903 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:28:54.944231  398903 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1212 20:28:54.944237  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944517  398903 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1212 20:28:54.944529  398903 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 20:28:54.944536  398903 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 20:28:54.944595  398903 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 20:28:54.944603  398903 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 20:28:54.944609  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944908  398903 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 20:28:54.944919  398903 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 20:28:54.944924  398903 command_runner.go:130] > # the cgroup blockio controller.
	I1212 20:28:54.945253  398903 command_runner.go:130] > # blockio_config_file = ""
	I1212 20:28:54.945265  398903 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 20:28:54.945309  398903 command_runner.go:130] > # blockio parameters.
	I1212 20:28:54.945663  398903 command_runner.go:130] > # blockio_reload = false
	I1212 20:28:54.945676  398903 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 20:28:54.945725  398903 command_runner.go:130] > # irqbalance daemon.
	I1212 20:28:54.946100  398903 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 20:28:54.946111  398903 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1212 20:28:54.946174  398903 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 20:28:54.946186  398903 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 20:28:54.946547  398903 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 20:28:54.946561  398903 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 20:28:54.946567  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.946867  398903 command_runner.go:130] > # rdt_config_file = ""
	I1212 20:28:54.946878  398903 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 20:28:54.947089  398903 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 20:28:54.947100  398903 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 20:28:54.947442  398903 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 20:28:54.947454  398903 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 20:28:54.947513  398903 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 20:28:54.947527  398903 command_runner.go:130] > # will be added.
	I1212 20:28:54.947601  398903 command_runner.go:130] > # default_capabilities = [
	I1212 20:28:54.947867  398903 command_runner.go:130] > # 	"CHOWN",
	I1212 20:28:54.948094  398903 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 20:28:54.948277  398903 command_runner.go:130] > # 	"FSETID",
	I1212 20:28:54.948500  398903 command_runner.go:130] > # 	"FOWNER",
	I1212 20:28:54.948701  398903 command_runner.go:130] > # 	"SETGID",
	I1212 20:28:54.948883  398903 command_runner.go:130] > # 	"SETUID",
	I1212 20:28:54.949109  398903 command_runner.go:130] > # 	"SETPCAP",
	I1212 20:28:54.949307  398903 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 20:28:54.949502  398903 command_runner.go:130] > # 	"KILL",
	I1212 20:28:54.949671  398903 command_runner.go:130] > # ]
	I1212 20:28:54.949741  398903 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 20:28:54.949814  398903 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 20:28:54.950073  398903 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 20:28:54.950143  398903 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 20:28:54.950211  398903 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:28:54.950289  398903 command_runner.go:130] > default_sysctls = [
	I1212 20:28:54.950330  398903 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 20:28:54.950370  398903 command_runner.go:130] > ]
	I1212 20:28:54.950439  398903 command_runner.go:130] > # List of devices on the host that a
	I1212 20:28:54.950465  398903 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 20:28:54.950518  398903 command_runner.go:130] > # allowed_devices = [
	I1212 20:28:54.950672  398903 command_runner.go:130] > # 	"/dev/fuse",
	I1212 20:28:54.950902  398903 command_runner.go:130] > # 	"/dev/net/tun",
	I1212 20:28:54.951150  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951221  398903 command_runner.go:130] > # List of additional devices. specified as
	I1212 20:28:54.951244  398903 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 20:28:54.951280  398903 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 20:28:54.951306  398903 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:28:54.951324  398903 command_runner.go:130] > # additional_devices = [
	I1212 20:28:54.951343  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951424  398903 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 20:28:54.951503  398903 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 20:28:54.951521  398903 command_runner.go:130] > # 	"/etc/cdi",
	I1212 20:28:54.951592  398903 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 20:28:54.951609  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951651  398903 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 20:28:54.951672  398903 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 20:28:54.951689  398903 command_runner.go:130] > # Defaults to false.
	I1212 20:28:54.951751  398903 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 20:28:54.951809  398903 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 20:28:54.951879  398903 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 20:28:54.951906  398903 command_runner.go:130] > # hooks_dir = [
	I1212 20:28:54.951934  398903 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 20:28:54.951952  398903 command_runner.go:130] > # ]
	I1212 20:28:54.952010  398903 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 20:28:54.952049  398903 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 20:28:54.952097  398903 command_runner.go:130] > # its default mounts from the following two files:
	I1212 20:28:54.952138  398903 command_runner.go:130] > #
	I1212 20:28:54.952160  398903 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 20:28:54.952191  398903 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 20:28:54.952262  398903 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 20:28:54.952281  398903 command_runner.go:130] > #
	I1212 20:28:54.952324  398903 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 20:28:54.952346  398903 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 20:28:54.952404  398903 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 20:28:54.952491  398903 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 20:28:54.952529  398903 command_runner.go:130] > #
	I1212 20:28:54.952568  398903 command_runner.go:130] > # default_mounts_file = ""
	I1212 20:28:54.952602  398903 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 20:28:54.952623  398903 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 20:28:54.952643  398903 command_runner.go:130] > # pids_limit = -1
	I1212 20:28:54.952677  398903 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 20:28:54.952708  398903 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 20:28:54.952837  398903 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 20:28:54.952892  398903 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 20:28:54.952911  398903 command_runner.go:130] > # log_size_max = -1
	I1212 20:28:54.952955  398903 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 20:28:54.953009  398903 command_runner.go:130] > # log_to_journald = false
	I1212 20:28:54.953062  398903 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 20:28:54.953088  398903 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 20:28:54.953123  398903 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 20:28:54.953149  398903 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 20:28:54.953170  398903 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 20:28:54.953206  398903 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 20:28:54.953299  398903 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 20:28:54.953339  398903 command_runner.go:130] > # read_only = false
	I1212 20:28:54.953359  398903 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 20:28:54.953395  398903 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 20:28:54.953418  398903 command_runner.go:130] > # live configuration reload.
	I1212 20:28:54.953436  398903 command_runner.go:130] > # log_level = "info"
	I1212 20:28:54.953472  398903 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 20:28:54.953562  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.953601  398903 command_runner.go:130] > # log_filter = ""
	I1212 20:28:54.953622  398903 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 20:28:54.953643  398903 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 20:28:54.953675  398903 command_runner.go:130] > # separated by comma.
	I1212 20:28:54.953712  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.953763  398903 command_runner.go:130] > # uid_mappings = ""
	I1212 20:28:54.953804  398903 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 20:28:54.953825  398903 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 20:28:54.953843  398903 command_runner.go:130] > # separated by comma.
	I1212 20:28:54.953907  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.953931  398903 command_runner.go:130] > # gid_mappings = ""
	I1212 20:28:54.953969  398903 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 20:28:54.954021  398903 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:28:54.954062  398903 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:28:54.954085  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.954103  398903 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 20:28:54.954162  398903 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 20:28:54.954184  398903 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:28:54.954234  398903 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:28:54.954322  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.954363  398903 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 20:28:54.954382  398903 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 20:28:54.954423  398903 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 20:28:54.954443  398903 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 20:28:54.954461  398903 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 20:28:54.954533  398903 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 20:28:54.954586  398903 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 20:28:54.954623  398903 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 20:28:54.954643  398903 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 20:28:54.954683  398903 command_runner.go:130] > # drop_infra_ctr = true
	I1212 20:28:54.954704  398903 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 20:28:54.954737  398903 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 20:28:54.954797  398903 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 20:28:54.954876  398903 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 20:28:54.954917  398903 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 20:28:54.954947  398903 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 20:28:54.954967  398903 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 20:28:54.955001  398903 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 20:28:54.955088  398903 command_runner.go:130] > # shared_cpuset = ""
	I1212 20:28:54.955124  398903 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 20:28:54.955160  398903 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 20:28:54.955179  398903 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 20:28:54.955201  398903 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 20:28:54.955242  398903 command_runner.go:130] > # pinns_path = ""
	I1212 20:28:54.955301  398903 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 20:28:54.955365  398903 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 20:28:54.955383  398903 command_runner.go:130] > # enable_criu_support = true
	I1212 20:28:54.955425  398903 command_runner.go:130] > # Enable/disable the generation of the container,
	I1212 20:28:54.955447  398903 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 20:28:54.955466  398903 command_runner.go:130] > # enable_pod_events = false
	I1212 20:28:54.955506  398903 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 20:28:54.955594  398903 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 20:28:54.955624  398903 command_runner.go:130] > # default_runtime = "crun"
	I1212 20:28:54.955661  398903 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 20:28:54.955697  398903 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 20:28:54.955721  398903 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 20:28:54.955790  398903 command_runner.go:130] > # creation as a file is not desired either.
	I1212 20:28:54.955868  398903 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 20:28:54.955891  398903 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 20:28:54.955927  398903 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 20:28:54.955946  398903 command_runner.go:130] > # ]
	I1212 20:28:54.955966  398903 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 20:28:54.956007  398903 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 20:28:54.956057  398903 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 20:28:54.956117  398903 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 20:28:54.956136  398903 command_runner.go:130] > #
	I1212 20:28:54.956299  398903 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 20:28:54.956391  398903 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 20:28:54.956423  398903 command_runner.go:130] > # runtime_type = "oci"
	I1212 20:28:54.956443  398903 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 20:28:54.956476  398903 command_runner.go:130] > # inherit_default_runtime = false
	I1212 20:28:54.956515  398903 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 20:28:54.956535  398903 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 20:28:54.956555  398903 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 20:28:54.956602  398903 command_runner.go:130] > # monitor_env = []
	I1212 20:28:54.956632  398903 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 20:28:54.956651  398903 command_runner.go:130] > # allowed_annotations = []
	I1212 20:28:54.956673  398903 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 20:28:54.956703  398903 command_runner.go:130] > # no_sync_log = false
	I1212 20:28:54.956730  398903 command_runner.go:130] > # default_annotations = {}
	I1212 20:28:54.956749  398903 command_runner.go:130] > # stream_websockets = false
	I1212 20:28:54.956770  398903 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:28:54.956828  398903 command_runner.go:130] > # Where:
	I1212 20:28:54.956858  398903 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 20:28:54.956879  398903 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 20:28:54.956902  398903 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 20:28:54.956934  398903 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 20:28:54.956956  398903 command_runner.go:130] > #   in $PATH.
	I1212 20:28:54.956979  398903 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 20:28:54.957012  398903 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 20:28:54.957045  398903 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 20:28:54.957066  398903 command_runner.go:130] > #   state.
	I1212 20:28:54.957088  398903 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 20:28:54.957122  398903 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1212 20:28:54.957146  398903 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1212 20:28:54.957169  398903 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1212 20:28:54.957202  398903 command_runner.go:130] > #   the values from the default runtime on load time.
	I1212 20:28:54.957227  398903 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 20:28:54.957250  398903 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 20:28:54.957281  398903 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 20:28:54.957305  398903 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 20:28:54.957327  398903 command_runner.go:130] > #   The currently recognized values are:
	I1212 20:28:54.957359  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 20:28:54.957385  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 20:28:54.957408  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 20:28:54.957450  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 20:28:54.957471  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 20:28:54.957498  398903 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 20:28:54.957534  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 20:28:54.957557  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 20:28:54.957580  398903 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 20:28:54.957613  398903 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1212 20:28:54.957636  398903 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1212 20:28:54.957657  398903 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1212 20:28:54.957689  398903 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1212 20:28:54.957712  398903 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1212 20:28:54.957733  398903 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1212 20:28:54.957769  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1212 20:28:54.957795  398903 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 20:28:54.957816  398903 command_runner.go:130] > #   deprecated option "conmon".
	I1212 20:28:54.957848  398903 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 20:28:54.957870  398903 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 20:28:54.957893  398903 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 20:28:54.957923  398903 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 20:28:54.957949  398903 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 20:28:54.957971  398903 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 20:28:54.958007  398903 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1212 20:28:54.958030  398903 command_runner.go:130] > #   conmon-rs by using:
	I1212 20:28:54.958053  398903 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1212 20:28:54.958092  398903 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1212 20:28:54.958133  398903 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1212 20:28:54.958204  398903 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 20:28:54.958225  398903 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 20:28:54.958278  398903 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1212 20:28:54.958303  398903 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1212 20:28:54.958340  398903 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1212 20:28:54.958372  398903 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1212 20:28:54.958415  398903 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1212 20:28:54.958449  398903 command_runner.go:130] > #   when a machine crash happens.
	I1212 20:28:54.958472  398903 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1212 20:28:54.958496  398903 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1212 20:28:54.958530  398903 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1212 20:28:54.958560  398903 command_runner.go:130] > #   seccomp profile for the runtime.
	I1212 20:28:54.958583  398903 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1212 20:28:54.958606  398903 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1212 20:28:54.958635  398903 command_runner.go:130] > #
	I1212 20:28:54.958656  398903 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 20:28:54.958676  398903 command_runner.go:130] > #
	I1212 20:28:54.958708  398903 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 20:28:54.958738  398903 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 20:28:54.958756  398903 command_runner.go:130] > #
	I1212 20:28:54.958778  398903 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 20:28:54.958809  398903 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 20:28:54.958834  398903 command_runner.go:130] > #
	I1212 20:28:54.958854  398903 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 20:28:54.958874  398903 command_runner.go:130] > # feature.
	I1212 20:28:54.958903  398903 command_runner.go:130] > #
	I1212 20:28:54.958934  398903 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1212 20:28:54.958955  398903 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 20:28:54.958978  398903 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 20:28:54.959015  398903 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 20:28:54.959041  398903 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 20:28:54.959060  398903 command_runner.go:130] > #
	I1212 20:28:54.959092  398903 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 20:28:54.959116  398903 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 20:28:54.959135  398903 command_runner.go:130] > #
	I1212 20:28:54.959166  398903 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1212 20:28:54.959195  398903 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 20:28:54.959213  398903 command_runner.go:130] > #
	I1212 20:28:54.959234  398903 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 20:28:54.959264  398903 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 20:28:54.959290  398903 command_runner.go:130] > # limitation.
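The notifier feature described above only works once the annotation is allow-listed on a runtime handler; a minimal sketch of such a drop-in follows (hypothetical, not part of the configuration dumped in this run, which does not allow this annotation):

	[crio.runtime.runtimes.runc]
	allowed_annotations = [
		"io.kubernetes.cri-o.seccompNotifierAction",
	]
	# Pods then set the annotation "io.kubernetes.cri-o.seccompNotifierAction=stop"
	# and restartPolicy: Never, as described above.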
	I1212 20:28:54.959309  398903 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1212 20:28:54.959329  398903 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1212 20:28:54.959363  398903 command_runner.go:130] > runtime_type = ""
	I1212 20:28:54.959390  398903 command_runner.go:130] > runtime_root = "/run/crun"
	I1212 20:28:54.959409  398903 command_runner.go:130] > inherit_default_runtime = false
	I1212 20:28:54.959429  398903 command_runner.go:130] > runtime_config_path = ""
	I1212 20:28:54.959460  398903 command_runner.go:130] > container_min_memory = ""
	I1212 20:28:54.959486  398903 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 20:28:54.959503  398903 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 20:28:54.959521  398903 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:28:54.959541  398903 command_runner.go:130] > allowed_annotations = [
	I1212 20:28:54.959574  398903 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1212 20:28:54.959593  398903 command_runner.go:130] > ]
	I1212 20:28:54.959612  398903 command_runner.go:130] > privileged_without_host_devices = false
	I1212 20:28:54.959644  398903 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 20:28:54.959671  398903 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1212 20:28:54.959688  398903 command_runner.go:130] > runtime_type = ""
	I1212 20:28:54.959705  398903 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 20:28:54.959727  398903 command_runner.go:130] > inherit_default_runtime = false
	I1212 20:28:54.959762  398903 command_runner.go:130] > runtime_config_path = ""
	I1212 20:28:54.959780  398903 command_runner.go:130] > container_min_memory = ""
	I1212 20:28:54.959800  398903 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 20:28:54.959819  398903 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 20:28:54.959855  398903 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:28:54.959872  398903 command_runner.go:130] > privileged_without_host_devices = false
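The crun and runc entries above show the shape of a runtime-handler block; a hypothetical additional handler would follow the same format (the "kata" name, binary path and config path below are illustrative assumptions, not taken from this run):

	[crio.runtime.runtimes.kata]
	runtime_path = "/usr/bin/containerd-shim-kata-v2"
	runtime_type = "vm"
	runtime_root = "/run/vc"
	runtime_config_path = "/usr/share/defaults/kata-containers/configuration.toml"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"
	# Selected when the CRI passes the "kata" runtime handler; otherwise default_runtime applies.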
	I1212 20:28:54.959894  398903 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 20:28:54.959924  398903 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 20:28:54.959953  398903 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 20:28:54.959976  398903 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 20:28:54.960002  398903 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1212 20:28:54.960047  398903 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1212 20:28:54.960072  398903 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1212 20:28:54.960106  398903 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 20:28:54.960135  398903 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 20:28:54.960156  398903 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 20:28:54.960176  398903 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 20:28:54.960207  398903 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 20:28:54.960236  398903 command_runner.go:130] > # Example:
	I1212 20:28:54.960257  398903 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 20:28:54.960281  398903 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 20:28:54.960315  398903 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 20:28:54.960337  398903 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 20:28:54.960356  398903 command_runner.go:130] > # cpuset = "0-1"
	I1212 20:28:54.960392  398903 command_runner.go:130] > # cpushares = "5"
	I1212 20:28:54.960413  398903 command_runner.go:130] > # cpuquota = "1000"
	I1212 20:28:54.960435  398903 command_runner.go:130] > # cpuperiod = "100000"
	I1212 20:28:54.960473  398903 command_runner.go:130] > # cpulimit = "35"
	I1212 20:28:54.960495  398903 command_runner.go:130] > # Where:
	I1212 20:28:54.960507  398903 command_runner.go:130] > # The workload name is workload-type.
	I1212 20:28:54.960516  398903 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 20:28:54.960522  398903 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 20:28:54.960542  398903 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 20:28:54.960555  398903 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 20:28:54.960563  398903 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
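Tying the example above together, a sketch of a complete workload drop-in plus the pod-side annotations it reacts to (the "throttled" name and all values are illustrative, not from this run):

	[crio.runtime.workloads.throttled]
	activation_annotation = "io.crio/throttled"
	annotation_prefix = "io.crio.throttled"
	[crio.runtime.workloads.throttled.resources]
	cpushares = "512"
	cpuset = "0-1"
	# A pod opts in with the key-only annotation "io.crio/throttled"; per-container
	# overrides follow the $annotation_prefix.$resource/$ctrName scheme described above,
	# e.g. an annotation key of "io.crio.throttled.cpushares/mycontainer".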
	I1212 20:28:54.960568  398903 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 20:28:54.960575  398903 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 20:28:54.960579  398903 command_runner.go:130] > # Default value is set to true
	I1212 20:28:54.960595  398903 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 20:28:54.960602  398903 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 20:28:54.960613  398903 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 20:28:54.960618  398903 command_runner.go:130] > # Default value is set to 'false'
	I1212 20:28:54.960623  398903 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 20:28:54.960637  398903 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1212 20:28:54.960645  398903 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1212 20:28:54.960649  398903 command_runner.go:130] > # timezone = ""
	I1212 20:28:54.960656  398903 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 20:28:54.960661  398903 command_runner.go:130] > #
	I1212 20:28:54.960668  398903 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 20:28:54.960675  398903 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1212 20:28:54.960682  398903 command_runner.go:130] > [crio.image]
	I1212 20:28:54.960688  398903 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 20:28:54.960693  398903 command_runner.go:130] > # default_transport = "docker://"
	I1212 20:28:54.960702  398903 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 20:28:54.960714  398903 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:28:54.960719  398903 command_runner.go:130] > # global_auth_file = ""
	I1212 20:28:54.960724  398903 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 20:28:54.960730  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.960738  398903 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.960745  398903 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 20:28:54.960758  398903 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:28:54.960764  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.960770  398903 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 20:28:54.960777  398903 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 20:28:54.960783  398903 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 20:28:54.960793  398903 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 20:28:54.960800  398903 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 20:28:54.960804  398903 command_runner.go:130] > # pause_command = "/pause"
	I1212 20:28:54.960810  398903 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 20:28:54.960819  398903 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 20:28:54.960828  398903 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 20:28:54.960837  398903 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 20:28:54.960843  398903 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 20:28:54.960855  398903 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 20:28:54.960859  398903 command_runner.go:130] > # pinned_images = [
	I1212 20:28:54.960863  398903 command_runner.go:130] > # ]
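For example, pinning the pause image named above so it is excluded from kubelet garbage collection would be a two-line drop-in (the image reference simply repeats the default shown earlier):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	pinned_images = [
		"registry.k8s.io/pause:3.10.1",
	]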
	I1212 20:28:54.960869  398903 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 20:28:54.960879  398903 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 20:28:54.960885  398903 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 20:28:54.960891  398903 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 20:28:54.960902  398903 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 20:28:54.960910  398903 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1212 20:28:54.960916  398903 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 20:28:54.960923  398903 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 20:28:54.960933  398903 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 20:28:54.960939  398903 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1212 20:28:54.960948  398903 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 20:28:54.960953  398903 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1212 20:28:54.960960  398903 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 20:28:54.960969  398903 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 20:28:54.960973  398903 command_runner.go:130] > # changing them here.
	I1212 20:28:54.960979  398903 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1212 20:28:54.960983  398903 command_runner.go:130] > # insecure_registries = [
	I1212 20:28:54.960986  398903 command_runner.go:130] > # ]
	I1212 20:28:54.960995  398903 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 20:28:54.961006  398903 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 20:28:54.961012  398903 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 20:28:54.961020  398903 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 20:28:54.961026  398903 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 20:28:54.961032  398903 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1212 20:28:54.961042  398903 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1212 20:28:54.961046  398903 command_runner.go:130] > # auto_reload_registries = false
	I1212 20:28:54.961054  398903 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1212 20:28:54.961062  398903 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1212 20:28:54.961069  398903 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1212 20:28:54.961077  398903 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1212 20:28:54.961082  398903 command_runner.go:130] > # The mode of short name resolution.
	I1212 20:28:54.961089  398903 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1212 20:28:54.961100  398903 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1212 20:28:54.961105  398903 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1212 20:28:54.961112  398903 command_runner.go:130] > # short_name_mode = "enforcing"
	I1212 20:28:54.961118  398903 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1212 20:28:54.961124  398903 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1212 20:28:54.961132  398903 command_runner.go:130] > # oci_artifact_mount_support = true
	I1212 20:28:54.961138  398903 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 20:28:54.961142  398903 command_runner.go:130] > # CNI plugins.
	I1212 20:28:54.961146  398903 command_runner.go:130] > [crio.network]
	I1212 20:28:54.961152  398903 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 20:28:54.961159  398903 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 20:28:54.961164  398903 command_runner.go:130] > # cni_default_network = ""
	I1212 20:28:54.961171  398903 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 20:28:54.961179  398903 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 20:28:54.961185  398903 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 20:28:54.961189  398903 command_runner.go:130] > # plugin_dirs = [
	I1212 20:28:54.961195  398903 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 20:28:54.961198  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961209  398903 command_runner.go:130] > # List of included pod metrics.
	I1212 20:28:54.961213  398903 command_runner.go:130] > # included_pod_metrics = [
	I1212 20:28:54.961217  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961224  398903 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 20:28:54.961228  398903 command_runner.go:130] > [crio.metrics]
	I1212 20:28:54.961234  398903 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 20:28:54.961243  398903 command_runner.go:130] > # enable_metrics = false
	I1212 20:28:54.961248  398903 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 20:28:54.961253  398903 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 20:28:54.961262  398903 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 20:28:54.961271  398903 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 20:28:54.961280  398903 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 20:28:54.961285  398903 command_runner.go:130] > # metrics_collectors = [
	I1212 20:28:54.961291  398903 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 20:28:54.961296  398903 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 20:28:54.961302  398903 command_runner.go:130] > # 	"containers_oom_total",
	I1212 20:28:54.961306  398903 command_runner.go:130] > # 	"processes_defunct",
	I1212 20:28:54.961311  398903 command_runner.go:130] > # 	"operations_total",
	I1212 20:28:54.961315  398903 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 20:28:54.961320  398903 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 20:28:54.961324  398903 command_runner.go:130] > # 	"operations_errors_total",
	I1212 20:28:54.961328  398903 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 20:28:54.961333  398903 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 20:28:54.961338  398903 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 20:28:54.961342  398903 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 20:28:54.961346  398903 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 20:28:54.961351  398903 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 20:28:54.961358  398903 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 20:28:54.961363  398903 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 20:28:54.961374  398903 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1212 20:28:54.961377  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961383  398903 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1212 20:28:54.961389  398903 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1212 20:28:54.961394  398903 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 20:28:54.961398  398903 command_runner.go:130] > # metrics_port = 9090
	I1212 20:28:54.961404  398903 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 20:28:54.961409  398903 command_runner.go:130] > # metrics_socket = ""
	I1212 20:28:54.961420  398903 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 20:28:54.961429  398903 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 20:28:54.961440  398903 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 20:28:54.961445  398903 command_runner.go:130] > # certificate on any modification event.
	I1212 20:28:54.961452  398903 command_runner.go:130] > # metrics_cert = ""
	I1212 20:28:54.961458  398903 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 20:28:54.961464  398903 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 20:28:54.961470  398903 command_runner.go:130] > # metrics_key = ""
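All of the metrics options above are commented out, i.e. left at their defaults; enabling the Prometheus endpoint would be a small drop-in along these lines (host, port and the chosen collectors are illustrative, taken from the defaults and collector names listed above):

	[crio.metrics]
	enable_metrics = true
	metrics_host = "127.0.0.1"
	metrics_port = 9090
	metrics_collectors = [
		"operations_total",
		"image_pulls_failure_total",
	]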
	I1212 20:28:54.961476  398903 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 20:28:54.961480  398903 command_runner.go:130] > [crio.tracing]
	I1212 20:28:54.961487  398903 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 20:28:54.961491  398903 command_runner.go:130] > # enable_tracing = false
	I1212 20:28:54.961499  398903 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 20:28:54.961504  398903 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1212 20:28:54.961513  398903 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 20:28:54.961520  398903 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
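Correspondingly, a sketch that turns tracing on and always samples, using the default endpoint and the "1000000 = always sample" rule stated above (values illustrative):

	[crio.tracing]
	enable_tracing = true
	tracing_endpoint = "127.0.0.1:4317"
	tracing_sampling_rate_per_million = 1000000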
	I1212 20:28:54.961527  398903 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 20:28:54.961530  398903 command_runner.go:130] > [crio.nri]
	I1212 20:28:54.961534  398903 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 20:28:54.961544  398903 command_runner.go:130] > # enable_nri = true
	I1212 20:28:54.961548  398903 command_runner.go:130] > # NRI socket to listen on.
	I1212 20:28:54.961553  398903 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 20:28:54.961559  398903 command_runner.go:130] > # NRI plugin directory to use.
	I1212 20:28:54.961564  398903 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 20:28:54.961569  398903 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 20:28:54.961574  398903 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 20:28:54.961579  398903 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 20:28:54.961660  398903 command_runner.go:130] > # nri_disable_connections = false
	I1212 20:28:54.961672  398903 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 20:28:54.961678  398903 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 20:28:54.961683  398903 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 20:28:54.961689  398903 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 20:28:54.961696  398903 command_runner.go:130] > # NRI default validator configuration.
	I1212 20:28:54.961703  398903 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1212 20:28:54.961717  398903 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1212 20:28:54.961722  398903 command_runner.go:130] > # can be restricted/rejected:
	I1212 20:28:54.961728  398903 command_runner.go:130] > # - OCI hook injection
	I1212 20:28:54.961734  398903 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1212 20:28:54.961740  398903 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1212 20:28:54.961747  398903 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1212 20:28:54.961752  398903 command_runner.go:130] > # - adjustment of linux namespaces
	I1212 20:28:54.961759  398903 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1212 20:28:54.961766  398903 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1212 20:28:54.961775  398903 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1212 20:28:54.961779  398903 command_runner.go:130] > #
	I1212 20:28:54.961783  398903 command_runner.go:130] > # [crio.nri.default_validator]
	I1212 20:28:54.961791  398903 command_runner.go:130] > # nri_enable_default_validator = false
	I1212 20:28:54.961796  398903 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1212 20:28:54.961802  398903 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1212 20:28:54.961810  398903 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1212 20:28:54.961815  398903 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1212 20:28:54.961821  398903 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1212 20:28:54.961828  398903 command_runner.go:130] > # nri_validator_required_plugins = [
	I1212 20:28:54.961831  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961838  398903 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
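A sketch of enabling the built-in validator described above, rejecting only OCI hook injection (all option names come from the commented defaults; the chosen values are illustrative):

	[crio.nri]
	enable_nri = true
	[crio.nri.default_validator]
	nri_enable_default_validator = true
	nri_validator_reject_oci_hook_adjustment = true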
	I1212 20:28:54.961845  398903 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 20:28:54.961851  398903 command_runner.go:130] > [crio.stats]
	I1212 20:28:54.961860  398903 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 20:28:54.961866  398903 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 20:28:54.961872  398903 command_runner.go:130] > # stats_collection_period = 0
	I1212 20:28:54.961879  398903 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1212 20:28:54.961889  398903 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1212 20:28:54.961894  398903 command_runner.go:130] > # collection_period = 0
	I1212 20:28:54.961945  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912485774Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1212 20:28:54.961961  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912523214Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1212 20:28:54.961978  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912551908Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1212 20:28:54.961989  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912577237Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1212 20:28:54.962000  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912661332Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.962016  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912929282Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1212 20:28:54.962028  398903 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 20:28:54.962158  398903 cni.go:84] Creating CNI manager for ""
	I1212 20:28:54.962172  398903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:28:54.962187  398903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:28:54.962211  398903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-261311 NodeName:functional-261311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:28:54.962351  398903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-261311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:28:54.962430  398903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:28:54.969281  398903 command_runner.go:130] > kubeadm
	I1212 20:28:54.969300  398903 command_runner.go:130] > kubectl
	I1212 20:28:54.969304  398903 command_runner.go:130] > kubelet
	I1212 20:28:54.970141  398903 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:28:54.970208  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:28:54.977797  398903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:28:54.990948  398903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:28:55.010887  398903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 20:28:55.035195  398903 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:28:55.039688  398903 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 20:28:55.039770  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:55.162925  398903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:28:55.180455  398903 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311 for IP: 192.168.49.2
	I1212 20:28:55.180486  398903 certs.go:195] generating shared ca certs ...
	I1212 20:28:55.180503  398903 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.180666  398903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:28:55.180714  398903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:28:55.180726  398903 certs.go:257] generating profile certs ...
	I1212 20:28:55.180830  398903 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key
	I1212 20:28:55.180895  398903 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7
	I1212 20:28:55.180950  398903 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key
	I1212 20:28:55.180963  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:28:55.180976  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:28:55.180993  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:28:55.181015  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:28:55.181034  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:28:55.181047  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:28:55.181062  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:28:55.181077  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:28:55.181130  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:28:55.181167  398903 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:28:55.181180  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:28:55.181208  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:28:55.181238  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:28:55.181263  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:28:55.181322  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:28:55.181358  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.181374  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.181387  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.181918  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:28:55.205330  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:28:55.228282  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:28:55.247851  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:28:55.266269  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:28:55.284183  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:28:55.302120  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:28:55.319891  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:28:55.338073  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:28:55.356708  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:28:55.374821  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:28:55.392459  398903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:28:55.405239  398903 ssh_runner.go:195] Run: openssl version
	I1212 20:28:55.411334  398903 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 20:28:55.411437  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.418985  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:28:55.426485  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430183  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430452  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430510  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.471108  398903 command_runner.go:130] > b5213941
	I1212 20:28:55.471637  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:28:55.479292  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.486905  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:28:55.494608  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498479  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498582  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498669  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.541933  398903 command_runner.go:130] > 51391683
	I1212 20:28:55.542454  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:28:55.550083  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.558343  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:28:55.567964  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571832  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571862  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571932  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.617329  398903 command_runner.go:130] > 3ec20f2e
	I1212 20:28:55.617911  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:28:55.625593  398903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:28:55.629390  398903 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:28:55.629419  398903 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 20:28:55.629427  398903 command_runner.go:130] > Device: 259,1	Inode: 1315224     Links: 1
	I1212 20:28:55.629433  398903 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:28:55.629439  398903 command_runner.go:130] > Access: 2025-12-12 20:24:47.845478497 +0000
	I1212 20:28:55.629445  398903 command_runner.go:130] > Modify: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629449  398903 command_runner.go:130] > Change: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629454  398903 command_runner.go:130] >  Birth: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629525  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:28:55.669986  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.670463  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:28:55.711204  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.711650  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:28:55.751880  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.752298  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:28:55.793260  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.793349  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:28:55.836082  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.836162  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:28:55.878637  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.879114  398903 kubeadm.go:401] StartCluster: {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:55.879241  398903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:28:55.879321  398903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:28:55.906646  398903 cri.go:89] found id: ""
	I1212 20:28:55.906721  398903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:28:55.913746  398903 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 20:28:55.913771  398903 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 20:28:55.913778  398903 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 20:28:55.914790  398903 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:28:55.914807  398903 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:28:55.914874  398903 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:28:55.922292  398903 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:28:55.922687  398903 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-261311" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.922785  398903 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "functional-261311" cluster setting kubeconfig missing "functional-261311" context setting]
	I1212 20:28:55.923055  398903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
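The kubeconfig repair logged above amounts to adding the missing "functional-261311" cluster and context entries and rewriting the file under a write lock. A rough client-go sketch of that kind of repair (illustrative only; the paths and entry names are taken from the log, the code is not minikube's):

```go
// Sketch: add a missing cluster/context pair to a kubeconfig with client-go's clientcmd.
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/22112-362983/kubeconfig" // from the log

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		// Start from an empty config if the file cannot be read yet.
		cfg = clientcmdapi.NewConfig()
	}

	cluster := clientcmdapi.NewCluster()
	cluster.Server = "https://192.168.49.2:8441"
	cluster.CertificateAuthority = "/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt"

	ctx := clientcmdapi.NewContext()
	ctx.Cluster = "functional-261311"
	ctx.AuthInfo = "functional-261311"

	cfg.Clusters["functional-261311"] = cluster
	cfg.Contexts["functional-261311"] = ctx
	cfg.CurrentContext = "functional-261311"

	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}
```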
	I1212 20:28:55.923461  398903 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.923610  398903 kapi.go:59] client config for functional-261311: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:28:55.924164  398903 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:28:55.924185  398903 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:28:55.924192  398903 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:28:55.924198  398903 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:28:55.924202  398903 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:28:55.924512  398903 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:28:55.924617  398903 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 20:28:55.932459  398903 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 20:28:55.932497  398903 kubeadm.go:602] duration metric: took 17.683266ms to restartPrimaryControlPlane
	I1212 20:28:55.932527  398903 kubeadm.go:403] duration metric: took 53.402973ms to StartCluster
	I1212 20:28:55.932549  398903 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.932634  398903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.933272  398903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.933478  398903 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:28:55.933879  398903 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:28:55.933961  398903 addons.go:70] Setting storage-provisioner=true in profile "functional-261311"
	I1212 20:28:55.933975  398903 addons.go:239] Setting addon storage-provisioner=true in "functional-261311"
	I1212 20:28:55.933999  398903 host.go:66] Checking if "functional-261311" exists ...
	I1212 20:28:55.933941  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:55.934065  398903 addons.go:70] Setting default-storageclass=true in profile "functional-261311"
	I1212 20:28:55.934077  398903 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-261311"
	I1212 20:28:55.934349  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.934437  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.939847  398903 out.go:179] * Verifying Kubernetes components...
	I1212 20:28:55.942718  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:55.970904  398903 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:28:55.971648  398903 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.971825  398903 kapi.go:59] client config for functional-261311: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:28:55.972098  398903 addons.go:239] Setting addon default-storageclass=true in "functional-261311"
	I1212 20:28:55.972128  398903 host.go:66] Checking if "functional-261311" exists ...
	I1212 20:28:55.972592  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.974802  398903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:55.974826  398903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:28:55.974884  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:56.016147  398903 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:56.016169  398903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:28:56.016234  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:56.029989  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:56.052293  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:56.147892  398903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:28:56.182806  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:56.199875  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:56.957368  398903 node_ready.go:35] waiting up to 6m0s for node "functional-261311" to be "Ready" ...
	I1212 20:28:56.957463  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957488  398903 type.go:168] "Request Body" body=""
	I1212 20:28:56.957545  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1212 20:28:56.957546  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957630  398903 retry.go:31] will retry after 313.594755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957713  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:56.957754  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957788  398903 retry.go:31] will retry after 317.565464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
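Both addon manifests fail to apply here because kubectl cannot reach the apiserver on localhost:8441, and the applier falls back to re-running the command after short, slightly jittered delays (313ms, 317ms, then longer). A hand-rolled sketch of that retry-with-backoff pattern (not minikube's retry.go; the command is just an example of a failing action):

```go
// Sketch: re-run an action with jittered, growing delays, as in the
// "will retry after 313.594755ms" lines above.
package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func retry(attempts int, initial time.Duration, action func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = action(); err == nil {
			return nil
		}
		// Add up to ~20% jitter, then double the base delay for the next round.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/5+1))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	err := retry(5, 300*time.Millisecond, func() error {
		// Hypothetical failing action used purely for illustration.
		return exec.Command("kubectl", "apply", "--force", "-f",
			"/etc/kubernetes/addons/storage-provisioner.yaml").Run()
	})
	if err != nil {
		fmt.Println("giving up:", err)
	}
}
```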
	I1212 20:28:56.957910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:57.272396  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:57.275890  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:57.344322  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.344435  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.344471  398903 retry.go:31] will retry after 221.297028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.351139  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.351181  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.351200  398903 retry.go:31] will retry after 309.802672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.458417  398903 type.go:168] "Request Body" body=""
	I1212 20:28:57.458511  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:57.458807  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:57.566100  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:57.625592  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.625687  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.625728  398903 retry.go:31] will retry after 499.665469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.661822  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:57.729487  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.729527  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.729550  398903 retry.go:31] will retry after 503.664724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.958032  398903 type.go:168] "Request Body" body=""
	I1212 20:28:57.958134  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:57.958421  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:58.126013  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:58.197757  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:58.197828  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.197853  398903 retry.go:31] will retry after 1.10540153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.234015  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:58.297441  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:58.297548  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.297576  398903 retry.go:31] will retry after 1.092264057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:28:58.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:58.458062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:58.957619  398903 type.go:168] "Request Body" body=""
	I1212 20:28:58.957696  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:58.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:28:58.958116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
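In parallel with the addon retries, node_ready.go polls GET /api/v1/nodes/functional-261311 roughly every 500 ms and keeps retrying while the connection is refused, inside the 6m0s budget set earlier. A client-go sketch of that style of wait (illustrative only; the kubeconfig path, node name, and timings are taken from the log, the code is not minikube's):

```go
// Sketch: poll a node object until its Ready condition turns True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22112-362983/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-261311", metav1.GetOptions{})
		if err == nil && nodeReady(node) {
			fmt.Println("node is Ready")
			return
		}
		// Connection refused and NotReady both land here; retry after ~500ms.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node Ready")
}
```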
	I1212 20:28:59.303542  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:59.364708  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:59.364773  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.364796  398903 retry.go:31] will retry after 1.503349263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.390910  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:59.449881  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:59.449970  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.450009  398903 retry.go:31] will retry after 1.024940216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.457981  398903 type.go:168] "Request Body" body=""
	I1212 20:28:59.458049  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:59.458335  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:59.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:28:59.957671  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:59.957942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:00.457683  398903 type.go:168] "Request Body" body=""
	I1212 20:29:00.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:00.458074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:00.475497  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:00.543993  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:00.544048  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.544072  398903 retry.go:31] will retry after 2.24833219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.868438  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:00.926476  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:00.930138  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.930173  398903 retry.go:31] will retry after 1.556562441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.958315  398903 type.go:168] "Request Body" body=""
	I1212 20:29:00.958392  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:00.958734  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:00.958787  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:01.458585  398903 type.go:168] "Request Body" body=""
	I1212 20:29:01.458668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:01.458995  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:01.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:29:01.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:01.958122  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:02.457889  398903 type.go:168] "Request Body" body=""
	I1212 20:29:02.457969  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:02.458299  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:02.487755  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:02.545597  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:02.549667  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.549705  398903 retry.go:31] will retry after 1.726891228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.793114  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:02.856403  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:02.860058  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.860101  398903 retry.go:31] will retry after 3.686133541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
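Note that every one of these apply failures is a refused connection to localhost:8441, not a problem with the manifests themselves, so the `--validate=false` escape hatch the error text suggests would not help. A small sketch of probing the apiserver's readiness endpoint before retrying the apply (an assumption for illustration only, not what the addon applier actually does):

```go
// Sketch: check apiserver reachability before retrying `kubectl apply`.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func apiserverUp(url string) bool {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe only cares about reachability, so certificate verification
		// is skipped here; never do this for real API traffic.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false // e.g. "connect: connection refused" as in the log above
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for i := 0; i < 10; i++ {
		if apiserverUp("https://localhost:8441/readyz") {
			fmt.Println("apiserver is ready; safe to retry the apply")
			return
		}
		fmt.Println("apiserver not reachable yet, waiting...")
		time.Sleep(3 * time.Second)
	}
}
```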
	I1212 20:29:02.958383  398903 type.go:168] "Request Body" body=""
	I1212 20:29:02.958453  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:02.958724  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:03.458506  398903 type.go:168] "Request Body" body=""
	I1212 20:29:03.458589  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:03.458945  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:03.459000  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:03.957692  398903 type.go:168] "Request Body" body=""
	I1212 20:29:03.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:03.958210  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:04.277666  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:04.331675  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:04.335668  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:04.335700  398903 retry.go:31] will retry after 4.014847664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:04.457944  398903 type.go:168] "Request Body" body=""
	I1212 20:29:04.458019  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:04.458285  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:04.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:04.957734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:04.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:05.457751  398903 type.go:168] "Request Body" body=""
	I1212 20:29:05.457828  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:05.458181  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:05.958009  398903 type.go:168] "Request Body" body=""
	I1212 20:29:05.958081  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:05.958416  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:05.958469  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:06.458265  398903 type.go:168] "Request Body" body=""
	I1212 20:29:06.458354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:06.458704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:06.546991  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:06.607592  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:06.607644  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:06.607664  398903 retry.go:31] will retry after 4.884355554s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:06.958122  398903 type.go:168] "Request Body" body=""
	I1212 20:29:06.958195  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:06.958538  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:07.458326  398903 type.go:168] "Request Body" body=""
	I1212 20:29:07.458394  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:07.458746  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:07.958381  398903 type.go:168] "Request Body" body=""
	I1212 20:29:07.958480  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:07.958781  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:07.958832  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:08.351452  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:08.404529  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:08.407970  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:08.408008  398903 retry.go:31] will retry after 4.723006947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:08.458208  398903 type.go:168] "Request Body" body=""
	I1212 20:29:08.458304  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:08.458620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:08.958349  398903 type.go:168] "Request Body" body=""
	I1212 20:29:08.958418  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:08.958733  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:09.458562  398903 type.go:168] "Request Body" body=""
	I1212 20:29:09.458637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:09.458962  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:09.957658  398903 type.go:168] "Request Body" body=""
	I1212 20:29:09.957734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:09.958100  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:10.458537  398903 type.go:168] "Request Body" body=""
	I1212 20:29:10.458602  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:10.458869  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:10.458910  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:10.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:29:10.957729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:10.958048  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:11.457940  398903 type.go:168] "Request Body" body=""
	I1212 20:29:11.458047  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:11.458416  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:11.492814  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:11.557889  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:11.557940  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:11.557960  398903 retry.go:31] will retry after 4.177574733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:11.958412  398903 type.go:168] "Request Body" body=""
	I1212 20:29:11.958494  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:11.958766  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:12.458544  398903 type.go:168] "Request Body" body=""
	I1212 20:29:12.458627  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:12.458916  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:12.458972  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:12.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:29:12.957732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:12.958047  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:13.131713  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:13.192350  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:13.192414  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:13.192433  398903 retry.go:31] will retry after 8.846505763s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:13.457684  398903 type.go:168] "Request Body" body=""
	I1212 20:29:13.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:13.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:13.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:29:13.957726  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:13.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:14.457780  398903 type.go:168] "Request Body" body=""
	I1212 20:29:14.457878  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:14.458172  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:14.957892  398903 type.go:168] "Request Body" body=""
	I1212 20:29:14.957968  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:14.958296  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:14.958356  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:15.457665  398903 type.go:168] "Request Body" body=""
	I1212 20:29:15.457745  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:15.458081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:15.737088  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:15.794323  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:15.794363  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:15.794386  398903 retry.go:31] will retry after 13.823463892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:15.958001  398903 type.go:168] "Request Body" body=""
	I1212 20:29:15.958077  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:15.958395  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:16.458178  398903 type.go:168] "Request Body" body=""
	I1212 20:29:16.458264  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:16.458517  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:16.958289  398903 type.go:168] "Request Body" body=""
	I1212 20:29:16.958364  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:16.958733  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:16.958807  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:17.458384  398903 type.go:168] "Request Body" body=""
	I1212 20:29:17.458485  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:17.458800  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:17.958573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:17.958679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:17.958934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:18.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:29:18.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:18.458009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:18.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:29:18.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:18.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:19.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:29:19.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:19.457978  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:19.458044  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:19.957579  398903 type.go:168] "Request Body" body=""
	I1212 20:29:19.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:19.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:20.457635  398903 type.go:168] "Request Body" body=""
	I1212 20:29:20.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:20.458035  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:20.957568  398903 type.go:168] "Request Body" body=""
	I1212 20:29:20.957646  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:20.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:21.457974  398903 type.go:168] "Request Body" body=""
	I1212 20:29:21.458051  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:21.458401  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:21.458459  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:21.958216  398903 type.go:168] "Request Body" body=""
	I1212 20:29:21.958294  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:21.958620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:22.040027  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:22.098166  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:22.102301  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:22.102333  398903 retry.go:31] will retry after 9.311877294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:22.458542  398903 type.go:168] "Request Body" body=""
	I1212 20:29:22.458608  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:22.458864  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:22.957555  398903 type.go:168] "Request Body" body=""
	I1212 20:29:22.957628  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:22.957965  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:23.457696  398903 type.go:168] "Request Body" body=""
	I1212 20:29:23.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:23.458108  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:23.957780  398903 type.go:168] "Request Body" body=""
	I1212 20:29:23.957869  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:23.958143  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:23.958184  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:24.457666  398903 type.go:168] "Request Body" body=""
	I1212 20:29:24.457740  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:24.458060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:24.957754  398903 type.go:168] "Request Body" body=""
	I1212 20:29:24.957831  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:24.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:25.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:29:25.457678  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:25.457956  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:25.958502  398903 type.go:168] "Request Body" body=""
	I1212 20:29:25.958583  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:25.958919  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:25.958993  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:26.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:29:26.457736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:26.458131  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:26.957783  398903 type.go:168] "Request Body" body=""
	I1212 20:29:26.957860  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:26.958177  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:27.457614  398903 type.go:168] "Request Body" body=""
	I1212 20:29:27.457693  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:27.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:27.957616  398903 type.go:168] "Request Body" body=""
	I1212 20:29:27.957698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:27.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:28.457711  398903 type.go:168] "Request Body" body=""
	I1212 20:29:28.457785  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:28.458119  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:28.458170  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:28.957619  398903 type.go:168] "Request Body" body=""
	I1212 20:29:28.957713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:28.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:29.457661  398903 type.go:168] "Request Body" body=""
	I1212 20:29:29.457736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:29.458113  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:29.618498  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:29.673247  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:29.677091  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:29.677126  398903 retry.go:31] will retry after 12.247484069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:29.958487  398903 type.go:168] "Request Body" body=""
	I1212 20:29:29.958556  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:29.958828  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:30.457589  398903 type.go:168] "Request Body" body=""
	I1212 20:29:30.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:30.458053  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:30.957764  398903 type.go:168] "Request Body" body=""
	I1212 20:29:30.957837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:30.958165  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:30.958221  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:31.415106  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:31.457708  398903 type.go:168] "Request Body" body=""
	I1212 20:29:31.457795  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:31.458059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:31.477657  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:31.481452  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:31.481486  398903 retry.go:31] will retry after 29.999837192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:31.958251  398903 type.go:168] "Request Body" body=""
	I1212 20:29:31.958329  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:31.958678  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:32.458335  398903 type.go:168] "Request Body" body=""
	I1212 20:29:32.458415  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:32.458816  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:32.958367  398903 type.go:168] "Request Body" body=""
	I1212 20:29:32.958440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:32.958702  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:32.958743  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:33.458498  398903 type.go:168] "Request Body" body=""
	I1212 20:29:33.458574  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:33.458942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:33.957518  398903 type.go:168] "Request Body" body=""
	I1212 20:29:33.957595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:33.957939  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:34.457617  398903 type.go:168] "Request Body" body=""
	I1212 20:29:34.457695  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:34.457969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:34.957613  398903 type.go:168] "Request Body" body=""
	I1212 20:29:34.957696  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:34.958009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:35.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:29:35.457690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:35.458075  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:35.458135  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:35.957713  398903 type.go:168] "Request Body" body=""
	I1212 20:29:35.957790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:35.958111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:36.457989  398903 type.go:168] "Request Body" body=""
	I1212 20:29:36.458070  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:36.458457  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:36.958268  398903 type.go:168] "Request Body" body=""
	I1212 20:29:36.958361  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:36.958681  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:37.458419  398903 type.go:168] "Request Body" body=""
	I1212 20:29:37.458489  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:37.458760  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:37.458803  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:37.958548  398903 type.go:168] "Request Body" body=""
	I1212 20:29:37.958632  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:37.958989  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:38.457703  398903 type.go:168] "Request Body" body=""
	I1212 20:29:38.457783  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:38.458130  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:38.957582  398903 type.go:168] "Request Body" body=""
	I1212 20:29:38.957648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:38.957909  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:39.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:29:39.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:39.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:39.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:39.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:39.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:39.958142  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:40.458512  398903 type.go:168] "Request Body" body=""
	I1212 20:29:40.458585  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:40.458875  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:40.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:40.957663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:40.957999  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:41.458005  398903 type.go:168] "Request Body" body=""
	I1212 20:29:41.458079  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:41.458415  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:41.924900  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:41.958510  398903 type.go:168] "Request Body" body=""
	I1212 20:29:41.958584  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:41.958850  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:41.958891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:42.001052  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:42.001094  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:42.001115  398903 retry.go:31] will retry after 30.772279059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:42.457672  398903 type.go:168] "Request Body" body=""
	I1212 20:29:42.457755  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:42.458082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:42.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:29:42.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:42.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:43.458540  398903 type.go:168] "Request Body" body=""
	I1212 20:29:43.458610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:43.458870  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:43.957586  398903 type.go:168] "Request Body" body=""
	I1212 20:29:43.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:43.958032  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:44.457633  398903 type.go:168] "Request Body" body=""
	I1212 20:29:44.457707  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:44.458045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:44.458100  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:44.957734  398903 type.go:168] "Request Body" body=""
	I1212 20:29:44.957834  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:44.958170  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:45.457726  398903 type.go:168] "Request Body" body=""
	I1212 20:29:45.457799  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:45.458152  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:45.957997  398903 type.go:168] "Request Body" body=""
	I1212 20:29:45.958081  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:45.958445  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:46.458286  398903 type.go:168] "Request Body" body=""
	I1212 20:29:46.458355  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:46.458622  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:46.458663  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:46.958455  398903 type.go:168] "Request Body" body=""
	I1212 20:29:46.958553  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:46.958947  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:47.457794  398903 type.go:168] "Request Body" body=""
	I1212 20:29:47.457932  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:47.458463  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:47.958292  398903 type.go:168] "Request Body" body=""
	I1212 20:29:47.958370  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:47.958645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:48.458483  398903 type.go:168] "Request Body" body=""
	I1212 20:29:48.458555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:48.458899  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:48.458971  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:48.957649  398903 type.go:168] "Request Body" body=""
	I1212 20:29:48.957731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:48.958090  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:49.457581  398903 type.go:168] "Request Body" body=""
	I1212 20:29:49.457649  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:49.457920  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:49.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:29:49.957681  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:49.958050  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:50.457756  398903 type.go:168] "Request Body" body=""
	I1212 20:29:50.457838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:50.458163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:50.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:50.957647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:50.957983  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:50.958033  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:51.457978  398903 type.go:168] "Request Body" body=""
	I1212 20:29:51.458054  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:51.458398  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:51.958201  398903 type.go:168] "Request Body" body=""
	I1212 20:29:51.958282  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:51.958598  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:52.458345  398903 type.go:168] "Request Body" body=""
	I1212 20:29:52.458418  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:52.458689  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:52.958457  398903 type.go:168] "Request Body" body=""
	I1212 20:29:52.958540  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:52.958883  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:52.958945  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:53.457615  398903 type.go:168] "Request Body" body=""
	I1212 20:29:53.457698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:53.458072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:53.957603  398903 type.go:168] "Request Body" body=""
	I1212 20:29:53.957674  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:53.957991  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:54.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:54.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:54.458053  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:54.957787  398903 type.go:168] "Request Body" body=""
	I1212 20:29:54.957892  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:54.958225  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:55.457579  398903 type.go:168] "Request Body" body=""
	I1212 20:29:55.457654  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:55.457934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:55.457987  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:55.957904  398903 type.go:168] "Request Body" body=""
	I1212 20:29:55.957979  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:55.958319  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:56.458108  398903 type.go:168] "Request Body" body=""
	I1212 20:29:56.458185  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:56.458525  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:56.958251  398903 type.go:168] "Request Body" body=""
	I1212 20:29:56.958317  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:56.958572  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:57.458381  398903 type.go:168] "Request Body" body=""
	I1212 20:29:57.458456  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:57.458824  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:57.458880  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:57.957590  398903 type.go:168] "Request Body" body=""
	I1212 20:29:57.957685  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:57.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:58.457591  398903 type.go:168] "Request Body" body=""
	I1212 20:29:58.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:58.457943  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:58.957651  398903 type.go:168] "Request Body" body=""
	I1212 20:29:58.957737  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:58.958104  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:59.457826  398903 type.go:168] "Request Body" body=""
	I1212 20:29:59.457924  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:59.458273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:59.957645  398903 type.go:168] "Request Body" body=""
	I1212 20:29:59.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:59.958054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:59.958118  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:00.457778  398903 type.go:168] "Request Body" body=""
	I1212 20:30:00.457870  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:00.458208  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:00.958235  398903 type.go:168] "Request Body" body=""
	I1212 20:30:00.958321  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:00.958755  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:01.460861  398903 type.go:168] "Request Body" body=""
	I1212 20:30:01.460950  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:01.461277  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:01.481640  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:30:01.559465  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:01.559521  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:01.559544  398903 retry.go:31] will retry after 33.36515596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:01.958099  398903 type.go:168] "Request Body" body=""
	I1212 20:30:01.958188  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:01.958490  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:01.958533  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:02.458305  398903 type.go:168] "Request Body" body=""
	I1212 20:30:02.458381  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:02.458719  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:02.958386  398903 type.go:168] "Request Body" body=""
	I1212 20:30:02.958464  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:02.958745  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:03.457579  398903 type.go:168] "Request Body" body=""
	I1212 20:30:03.457694  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:03.458099  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:03.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:30:03.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:03.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:04.457668  398903 type.go:168] "Request Body" body=""
	I1212 20:30:04.457751  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:04.458056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:04.458116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:04.957692  398903 type.go:168] "Request Body" body=""
	I1212 20:30:04.957771  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:04.958103  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:05.457691  398903 type.go:168] "Request Body" body=""
	I1212 20:30:05.457777  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:05.458124  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:05.958166  398903 type.go:168] "Request Body" body=""
	I1212 20:30:05.958257  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:05.958561  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:06.458375  398903 type.go:168] "Request Body" body=""
	I1212 20:30:06.458451  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:06.458788  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:06.458844  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:06.957529  398903 type.go:168] "Request Body" body=""
	I1212 20:30:06.957610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:06.957955  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:07.457552  398903 type.go:168] "Request Body" body=""
	I1212 20:30:07.457657  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:07.457968  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:07.957700  398903 type.go:168] "Request Body" body=""
	I1212 20:30:07.957780  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:07.958080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:08.457647  398903 type.go:168] "Request Body" body=""
	I1212 20:30:08.457728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:08.458065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:08.957730  398903 type.go:168] "Request Body" body=""
	I1212 20:30:08.957837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:08.958111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:08.958162  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:09.457851  398903 type.go:168] "Request Body" body=""
	I1212 20:30:09.457929  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:09.458309  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:09.958049  398903 type.go:168] "Request Body" body=""
	I1212 20:30:09.958147  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:09.958566  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:10.458362  398903 type.go:168] "Request Body" body=""
	I1212 20:30:10.458440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:10.458707  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:10.958517  398903 type.go:168] "Request Body" body=""
	I1212 20:30:10.958590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:10.958916  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:10.958976  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:11.457913  398903 type.go:168] "Request Body" body=""
	I1212 20:30:11.458009  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:11.458358  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:11.958078  398903 type.go:168] "Request Body" body=""
	I1212 20:30:11.958148  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:11.958429  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:12.458295  398903 type.go:168] "Request Body" body=""
	I1212 20:30:12.458371  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:12.458726  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:12.774318  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:30:12.840421  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:12.840464  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:12.840483  398903 retry.go:31] will retry after 30.011296842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:12.957679  398903 type.go:168] "Request Body" body=""
	I1212 20:30:12.957756  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:12.958081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:13.457610  398903 type.go:168] "Request Body" body=""
	I1212 20:30:13.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:13.457937  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:13.457978  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:13.957691  398903 type.go:168] "Request Body" body=""
	I1212 20:30:13.957779  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:13.958199  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:14.457740  398903 type.go:168] "Request Body" body=""
	I1212 20:30:14.457821  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:14.458184  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:14.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:30:14.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:14.958021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:15.457670  398903 type.go:168] "Request Body" body=""
	I1212 20:30:15.457751  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:15.458088  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:15.458148  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:15.958126  398903 type.go:168] "Request Body" body=""
	I1212 20:30:15.958215  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:15.958644  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:16.458362  398903 type.go:168] "Request Body" body=""
	I1212 20:30:16.458429  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:16.458692  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:16.958433  398903 type.go:168] "Request Body" body=""
	I1212 20:30:16.958508  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:16.958865  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:17.458563  398903 type.go:168] "Request Body" body=""
	I1212 20:30:17.458662  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:17.459072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:17.459137  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:17.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:30:17.957765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:17.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:18.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:30:18.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:18.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:18.957647  398903 type.go:168] "Request Body" body=""
	I1212 20:30:18.957740  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:18.958158  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:19.457570  398903 type.go:168] "Request Body" body=""
	I1212 20:30:19.457653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:19.457996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:19.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:30:19.957747  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:19.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:19.958157  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:20.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:30:20.457785  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:20.458135  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:20.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:30:20.957690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:20.958023  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:21.458157  398903 type.go:168] "Request Body" body=""
	I1212 20:30:21.458249  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:21.458570  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:21.958397  398903 type.go:168] "Request Body" body=""
	I1212 20:30:21.958474  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:21.958860  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:21.958919  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:22.457576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:22.457650  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:22.457962  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:22.957698  398903 type.go:168] "Request Body" body=""
	I1212 20:30:22.957818  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:22.958168  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:23.457673  398903 type.go:168] "Request Body" body=""
	I1212 20:30:23.457752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:23.458096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:23.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:23.957683  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:23.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:24.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:30:24.457734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:24.458020  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:24.458072  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:24.957672  398903 type.go:168] "Request Body" body=""
	I1212 20:30:24.957748  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:24.958123  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:25.457534  398903 type.go:168] "Request Body" body=""
	I1212 20:30:25.457604  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:25.457872  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:25.958565  398903 type.go:168] "Request Body" body=""
	I1212 20:30:25.958637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:25.958933  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:26.457975  398903 type.go:168] "Request Body" body=""
	I1212 20:30:26.458048  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:26.458392  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:26.458450  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:26.957925  398903 type.go:168] "Request Body" body=""
	I1212 20:30:26.957996  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:26.958288  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:27.457662  398903 type.go:168] "Request Body" body=""
	I1212 20:30:27.457734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:27.458086  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:27.957807  398903 type.go:168] "Request Body" body=""
	I1212 20:30:27.957887  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:27.958218  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:28.457696  398903 type.go:168] "Request Body" body=""
	I1212 20:30:28.457762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:28.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:28.957686  398903 type.go:168] "Request Body" body=""
	I1212 20:30:28.957778  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:28.958129  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:28.958185  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:29.457860  398903 type.go:168] "Request Body" body=""
	I1212 20:30:29.457948  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:29.458268  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:29.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:29.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:29.957934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:30.457654  398903 type.go:168] "Request Body" body=""
	I1212 20:30:30.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:30.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:30.957783  398903 type.go:168] "Request Body" body=""
	I1212 20:30:30.957859  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:30.958248  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:30.958301  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:31.458270  398903 type.go:168] "Request Body" body=""
	I1212 20:30:31.458363  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:31.458639  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:31.958457  398903 type.go:168] "Request Body" body=""
	I1212 20:30:31.958547  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:31.958925  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:32.457675  398903 type.go:168] "Request Body" body=""
	I1212 20:30:32.457752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:32.458042  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:32.957526  398903 type.go:168] "Request Body" body=""
	I1212 20:30:32.957599  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:32.957876  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:33.457638  398903 type.go:168] "Request Body" body=""
	I1212 20:30:33.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:33.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:33.458151  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:33.957835  398903 type.go:168] "Request Body" body=""
	I1212 20:30:33.957912  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:33.958249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:30:34.457709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:34.458076  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.925852  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:30:34.958350  398903 type.go:168] "Request Body" body=""
	I1212 20:30:34.958426  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:34.958704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.987024  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:34.990602  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:34.990708  398903 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 20:30:35.458275  398903 type.go:168] "Request Body" body=""
	I1212 20:30:35.458354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:35.458681  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:35.458739  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:35.958407  398903 type.go:168] "Request Body" body=""
	I1212 20:30:35.958492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:35.958762  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:36.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:30:36.457712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:36.458038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:36.957607  398903 type.go:168] "Request Body" body=""
	I1212 20:30:36.957687  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:36.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:37.457711  398903 type.go:168] "Request Body" body=""
	I1212 20:30:37.457790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:37.458074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:37.957761  398903 type.go:168] "Request Body" body=""
	I1212 20:30:37.957838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:37.958213  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:37.958272  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:38.457940  398903 type.go:168] "Request Body" body=""
	I1212 20:30:38.458016  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:38.458369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:38.958134  398903 type.go:168] "Request Body" body=""
	I1212 20:30:38.958210  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:38.958478  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:39.458248  398903 type.go:168] "Request Body" body=""
	I1212 20:30:39.458336  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:39.458729  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:39.958456  398903 type.go:168] "Request Body" body=""
	I1212 20:30:39.958539  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:39.958888  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:39.958942  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:40.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:30:40.457648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:40.457967  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:40.957645  398903 type.go:168] "Request Body" body=""
	I1212 20:30:40.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:40.958059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:41.458059  398903 type.go:168] "Request Body" body=""
	I1212 20:30:41.458151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:41.458482  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:41.958252  398903 type.go:168] "Request Body" body=""
	I1212 20:30:41.958327  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:41.958608  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:42.458416  398903 type.go:168] "Request Body" body=""
	I1212 20:30:42.458492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:42.458825  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:42.458889  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:42.852572  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:30:42.917565  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:42.921658  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:42.921759  398903 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 20:30:42.924799  398903 out.go:179] * Enabled addons: 
	I1212 20:30:42.926930  398903 addons.go:530] duration metric: took 1m46.993054127s for enable addons: enabled=[]
	I1212 20:30:42.957819  398903 type.go:168] "Request Body" body=""
	I1212 20:30:42.957896  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:42.958219  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:43.457528  398903 type.go:168] "Request Body" body=""
	I1212 20:30:43.457600  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:43.457900  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:43.957607  398903 type.go:168] "Request Body" body=""
	I1212 20:30:43.957687  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:43.958029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:44.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:30:44.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:44.458022  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:44.957587  398903 type.go:168] "Request Body" body=""
	I1212 20:30:44.957676  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:44.957941  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:44.957982  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
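The cycle above, a GET to /api/v1/nodes/functional-261311 roughly every 500ms followed by "connection refused", is the Ready-condition poll that node_ready.go performs while waiting for the apiserver to come back. The following is a minimal client-go sketch of that kind of poll, assuming the kubeconfig path shown earlier in the log; it illustrates the pattern only and is not minikube's actual implementation.

	// Illustrative Ready-condition poll, assuming /var/lib/minikube/kubeconfig
	// points at the https://192.168.49.2:8441 apiserver from the log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-261311", metav1.GetOptions{})
			if err != nil {
				// This is the branch the log keeps hitting while the apiserver is down.
				fmt.Println("error getting node (will retry):", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					fmt.Println("node functional-261311 is Ready")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}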
	I1212 20:30:45.457697  398903 type.go:168] "Request Body" body=""
	I1212 20:30:45.457796  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:45.458121  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:45.958191  398903 type.go:168] "Request Body" body=""
	I1212 20:30:45.958294  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:45.958612  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:46.458444  398903 type.go:168] "Request Body" body=""
	I1212 20:30:46.458532  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:46.458807  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:46.957599  398903 type.go:168] "Request Body" body=""
	I1212 20:30:46.957698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:46.958064  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:46.958134  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:47.457807  398903 type.go:168] "Request Body" body=""
	I1212 20:30:47.457902  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:47.458266  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:47.957963  398903 type.go:168] "Request Body" body=""
	I1212 20:30:47.958044  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:47.958323  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:48.457878  398903 type.go:168] "Request Body" body=""
	I1212 20:30:48.457954  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:48.458353  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:48.957937  398903 type.go:168] "Request Body" body=""
	I1212 20:30:48.958025  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:48.958407  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:48.958465  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:49.458150  398903 type.go:168] "Request Body" body=""
	I1212 20:30:49.458217  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:49.458483  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:49.958339  398903 type.go:168] "Request Body" body=""
	I1212 20:30:49.958422  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:49.958782  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:50.457522  398903 type.go:168] "Request Body" body=""
	I1212 20:30:50.457619  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:50.457974  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:50.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:30:50.957709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:50.957969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:51.457956  398903 type.go:168] "Request Body" body=""
	I1212 20:30:51.458033  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:51.458372  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:51.458436  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:51.958247  398903 type.go:168] "Request Body" body=""
	I1212 20:30:51.958354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:51.958760  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:52.458531  398903 type.go:168] "Request Body" body=""
	I1212 20:30:52.458606  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:52.458887  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:52.957622  398903 type.go:168] "Request Body" body=""
	I1212 20:30:52.957701  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:52.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:53.457803  398903 type.go:168] "Request Body" body=""
	I1212 20:30:53.457880  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:53.458232  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:53.957948  398903 type.go:168] "Request Body" body=""
	I1212 20:30:53.958039  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:53.958314  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:53.958357  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:54.458007  398903 type.go:168] "Request Body" body=""
	I1212 20:30:54.458120  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:54.458562  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:54.957657  398903 type.go:168] "Request Body" body=""
	I1212 20:30:54.957767  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:54.958125  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:55.457599  398903 type.go:168] "Request Body" body=""
	I1212 20:30:55.457671  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:55.458062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:55.958515  398903 type.go:168] "Request Body" body=""
	I1212 20:30:55.958592  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:55.958958  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:55.959020  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:56.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:30:56.457702  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:56.458059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:56.957581  398903 type.go:168] "Request Body" body=""
	I1212 20:30:56.957655  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:56.957949  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:57.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:30:57.457710  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:57.458063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:57.958430  398903 type.go:168] "Request Body" body=""
	I1212 20:30:57.958528  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:57.958868  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:58.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:30:58.457682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:58.458002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:58.458062  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:58.957718  398903 type.go:168] "Request Body" body=""
	I1212 20:30:58.957798  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:58.958154  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:59.457651  398903 type.go:168] "Request Body" body=""
	I1212 20:30:59.457732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:59.458077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:59.957798  398903 type.go:168] "Request Body" body=""
	I1212 20:30:59.957888  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:59.958201  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:00.457692  398903 type.go:168] "Request Body" body=""
	I1212 20:31:00.457780  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:00.458189  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:00.458250  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:00.957940  398903 type.go:168] "Request Body" body=""
	I1212 20:31:00.958024  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:00.958346  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:01.458223  398903 type.go:168] "Request Body" body=""
	I1212 20:31:01.458299  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:01.458574  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:01.958306  398903 type.go:168] "Request Body" body=""
	I1212 20:31:01.958388  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:01.958736  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:02.458565  398903 type.go:168] "Request Body" body=""
	I1212 20:31:02.458645  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:02.459016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:02.459076  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:02.957720  398903 type.go:168] "Request Body" body=""
	I1212 20:31:02.957798  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:02.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:03.457664  398903 type.go:168] "Request Body" body=""
	I1212 20:31:03.457746  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:03.458099  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:03.957853  398903 type.go:168] "Request Body" body=""
	I1212 20:31:03.957937  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:03.958274  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:04.457595  398903 type.go:168] "Request Body" body=""
	I1212 20:31:04.457669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:04.458030  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:04.957597  398903 type.go:168] "Request Body" body=""
	I1212 20:31:04.957676  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:04.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:04.958098  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:05.457625  398903 type.go:168] "Request Body" body=""
	I1212 20:31:05.457701  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:05.458052  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:05.957782  398903 type.go:168] "Request Body" body=""
	I1212 20:31:05.957863  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:05.958194  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:06.458145  398903 type.go:168] "Request Body" body=""
	I1212 20:31:06.458228  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:06.458587  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:06.958415  398903 type.go:168] "Request Body" body=""
	I1212 20:31:06.958493  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:06.958820  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:06.958879  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:07.457506  398903 type.go:168] "Request Body" body=""
	I1212 20:31:07.457575  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:07.457849  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:07.957622  398903 type.go:168] "Request Body" body=""
	I1212 20:31:07.957714  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:07.958056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:08.457776  398903 type.go:168] "Request Body" body=""
	I1212 20:31:08.457879  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:08.458223  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:08.957577  398903 type.go:168] "Request Body" body=""
	I1212 20:31:08.957652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:08.957982  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:09.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:31:09.457705  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:09.458016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:09.458076  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:09.957794  398903 type.go:168] "Request Body" body=""
	I1212 20:31:09.957907  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:09.958279  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:10.457971  398903 type.go:168] "Request Body" body=""
	I1212 20:31:10.458047  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:10.458382  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:10.958220  398903 type.go:168] "Request Body" body=""
	I1212 20:31:10.958321  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:10.958714  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:11.457646  398903 type.go:168] "Request Body" body=""
	I1212 20:31:11.457724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:11.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:11.458138  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:11.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:31:11.957664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:11.957969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:12.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:31:12.457686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:12.458031  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:12.957743  398903 type.go:168] "Request Body" body=""
	I1212 20:31:12.957841  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:12.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:13.458376  398903 type.go:168] "Request Body" body=""
	I1212 20:31:13.458443  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:13.458763  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:13.458818  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:13.958577  398903 type.go:168] "Request Body" body=""
	I1212 20:31:13.958652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:13.958977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:14.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:31:14.457733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:14.458101  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:14.957799  398903 type.go:168] "Request Body" body=""
	I1212 20:31:14.957875  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:14.958197  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:15.457653  398903 type.go:168] "Request Body" body=""
	I1212 20:31:15.457732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:15.458080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:15.958122  398903 type.go:168] "Request Body" body=""
	I1212 20:31:15.958204  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:15.958537  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:15.958599  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:16.458429  398903 type.go:168] "Request Body" body=""
	I1212 20:31:16.458501  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:16.458769  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:16.957534  398903 type.go:168] "Request Body" body=""
	I1212 20:31:16.957617  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:16.957998  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:17.457728  398903 type.go:168] "Request Body" body=""
	I1212 20:31:17.457806  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:17.458115  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:17.957591  398903 type.go:168] "Request Body" body=""
	I1212 20:31:17.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:17.958019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:18.457741  398903 type.go:168] "Request Body" body=""
	I1212 20:31:18.457847  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:18.458133  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:18.458180  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:18.957696  398903 type.go:168] "Request Body" body=""
	I1212 20:31:18.957790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:18.958212  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:19.457727  398903 type.go:168] "Request Body" body=""
	I1212 20:31:19.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:19.458140  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:19.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:31:19.957742  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:19.958077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:20.457686  398903 type.go:168] "Request Body" body=""
	I1212 20:31:20.457762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:20.458091  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:20.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:31:20.957650  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:20.957923  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:20.957972  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:21.457915  398903 type.go:168] "Request Body" body=""
	I1212 20:31:21.457990  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:21.458320  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:21.958165  398903 type.go:168] "Request Body" body=""
	I1212 20:31:21.958276  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:21.958607  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:22.458365  398903 type.go:168] "Request Body" body=""
	I1212 20:31:22.458440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:22.458716  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:22.958558  398903 type.go:168] "Request Body" body=""
	I1212 20:31:22.958659  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:22.959007  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:22.959071  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:23.457766  398903 type.go:168] "Request Body" body=""
	I1212 20:31:23.457845  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:23.458211  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:23.957896  398903 type.go:168] "Request Body" body=""
	I1212 20:31:23.957969  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:23.958315  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:24.457613  398903 type.go:168] "Request Body" body=""
	I1212 20:31:24.457714  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:24.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:24.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:31:24.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:24.958115  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:25.457623  398903 type.go:168] "Request Body" body=""
	I1212 20:31:25.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:25.457977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:25.458017  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:25.958041  398903 type.go:168] "Request Body" body=""
	I1212 20:31:25.958123  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:25.958512  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:26.458319  398903 type.go:168] "Request Body" body=""
	I1212 20:31:26.458398  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:26.458689  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:26.958470  398903 type.go:168] "Request Body" body=""
	I1212 20:31:26.958549  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:26.958846  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:27.457587  398903 type.go:168] "Request Body" body=""
	I1212 20:31:27.457677  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:27.457993  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:27.458047  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:27.957637  398903 type.go:168] "Request Body" body=""
	I1212 20:31:27.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:27.958051  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:28.457523  398903 type.go:168] "Request Body" body=""
	I1212 20:31:28.457597  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:28.457900  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:28.957667  398903 type.go:168] "Request Body" body=""
	I1212 20:31:28.957755  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:28.958112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:29.457645  398903 type.go:168] "Request Body" body=""
	I1212 20:31:29.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:29.458112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:29.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:29.957515  398903 type.go:168] "Request Body" body=""
	I1212 20:31:29.957590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:29.957922  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:30.457639  398903 type.go:168] "Request Body" body=""
	I1212 20:31:30.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:30.458057  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:30.957753  398903 type.go:168] "Request Body" body=""
	I1212 20:31:30.957854  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:30.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:31.458036  398903 type.go:168] "Request Body" body=""
	I1212 20:31:31.458104  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:31.458369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:31.458409  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:31.958181  398903 type.go:168] "Request Body" body=""
	I1212 20:31:31.958258  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:31.958643  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:32.458473  398903 type.go:168] "Request Body" body=""
	I1212 20:31:32.458585  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:32.458949  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:32.957626  398903 type.go:168] "Request Body" body=""
	I1212 20:31:32.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:32.958012  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:33.457650  398903 type.go:168] "Request Body" body=""
	I1212 20:31:33.457738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:33.458114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:33.957824  398903 type.go:168] "Request Body" body=""
	I1212 20:31:33.957905  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:33.958247  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:33.958303  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:34.458003  398903 type.go:168] "Request Body" body=""
	I1212 20:31:34.458078  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:34.458409  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:34.958240  398903 type.go:168] "Request Body" body=""
	I1212 20:31:34.958349  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:34.958734  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:35.458572  398903 type.go:168] "Request Body" body=""
	I1212 20:31:35.458682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:35.459077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:35.958480  398903 type.go:168] "Request Body" body=""
	I1212 20:31:35.958555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:35.958847  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:35.958891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:36.457738  398903 type.go:168] "Request Body" body=""
	I1212 20:31:36.457817  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:36.458167  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:36.957850  398903 type.go:168] "Request Body" body=""
	I1212 20:31:36.957948  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:36.958275  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:37.457594  398903 type.go:168] "Request Body" body=""
	I1212 20:31:37.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:37.457978  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:37.957634  398903 type.go:168] "Request Body" body=""
	I1212 20:31:37.957712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:37.958057  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:38.457680  398903 type.go:168] "Request Body" body=""
	I1212 20:31:38.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:38.458134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:38.458189  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:38.957510  398903 type.go:168] "Request Body" body=""
	I1212 20:31:38.957592  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:38.957862  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:39.457578  398903 type.go:168] "Request Body" body=""
	I1212 20:31:39.457664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:39.457985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:39.957715  398903 type.go:168] "Request Body" body=""
	I1212 20:31:39.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:39.958106  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:40.457563  398903 type.go:168] "Request Body" body=""
	I1212 20:31:40.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:40.457964  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:40.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:31:40.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:40.958114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:40.958173  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:41.457926  398903 type.go:168] "Request Body" body=""
	I1212 20:31:41.458028  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:41.458354  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:41.958180  398903 type.go:168] "Request Body" body=""
	I1212 20:31:41.958256  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:41.958548  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:42.458349  398903 type.go:168] "Request Body" body=""
	I1212 20:31:42.458439  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:42.458833  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:42.958514  398903 type.go:168] "Request Body" body=""
	I1212 20:31:42.958594  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:42.958932  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:42.958992  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:43.457618  398903 type.go:168] "Request Body" body=""
	I1212 20:31:43.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:43.458058  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:43.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:31:43.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:43.958071  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:44.457779  398903 type.go:168] "Request Body" body=""
	I1212 20:31:44.457857  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:44.458177  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:44.957579  398903 type.go:168] "Request Body" body=""
	I1212 20:31:44.957657  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:44.957982  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:45.457590  398903 type.go:168] "Request Body" body=""
	I1212 20:31:45.457667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:45.458010  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:45.458070  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:45.957784  398903 type.go:168] "Request Body" body=""
	I1212 20:31:45.957877  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:45.958249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:46.458071  398903 type.go:168] "Request Body" body=""
	I1212 20:31:46.458151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:46.458414  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:46.958212  398903 type.go:168] "Request Body" body=""
	I1212 20:31:46.958295  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:46.958642  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:47.458480  398903 type.go:168] "Request Body" body=""
	I1212 20:31:47.458558  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:47.458926  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:47.458982  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:47.957584  398903 type.go:168] "Request Body" body=""
	I1212 20:31:47.957658  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:47.957921  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:48.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:31:48.457764  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:48.458171  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:48.957862  398903 type.go:168] "Request Body" body=""
	I1212 20:31:48.957972  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:48.958326  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:49.458004  398903 type.go:168] "Request Body" body=""
	I1212 20:31:49.458083  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:49.458381  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:49.958209  398903 type.go:168] "Request Body" body=""
	I1212 20:31:49.958290  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:49.958636  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:49.958695  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:50.458420  398903 type.go:168] "Request Body" body=""
	I1212 20:31:50.458495  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:50.458818  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:50.957496  398903 type.go:168] "Request Body" body=""
	I1212 20:31:50.957563  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:50.957832  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:51.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:31:51.457746  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:51.458084  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:51.957648  398903 type.go:168] "Request Body" body=""
	I1212 20:31:51.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:51.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:52.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:31:52.457781  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:52.458111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:52.458163  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:52.957662  398903 type.go:168] "Request Body" body=""
	I1212 20:31:52.957750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:52.958096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:53.457800  398903 type.go:168] "Request Body" body=""
	I1212 20:31:53.457898  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:53.458256  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:53.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:31:53.957647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:53.957914  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:54.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:31:54.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:54.458054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:54.957782  398903 type.go:168] "Request Body" body=""
	I1212 20:31:54.957867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:54.958171  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:54.958225  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:55.457602  398903 type.go:168] "Request Body" body=""
	I1212 20:31:55.457673  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:55.457942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:55.957857  398903 type.go:168] "Request Body" body=""
	I1212 20:31:55.957935  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:55.958273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:56.458155  398903 type.go:168] "Request Body" body=""
	I1212 20:31:56.458233  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:56.458540  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:56.958285  398903 type.go:168] "Request Body" body=""
	I1212 20:31:56.958359  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:56.958625  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:56.958670  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:57.458411  398903 type.go:168] "Request Body" body=""
	I1212 20:31:57.458485  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:57.458823  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:57.958474  398903 type.go:168] "Request Body" body=""
	I1212 20:31:57.958559  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:57.958919  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:58.457568  398903 type.go:168] "Request Body" body=""
	I1212 20:31:58.457647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:58.457965  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:58.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:31:58.957725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:58.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:59.457623  398903 type.go:168] "Request Body" body=""
	I1212 20:31:59.457697  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:59.458016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:59.458072  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:59.957590  398903 type.go:168] "Request Body" body=""
	I1212 20:31:59.957669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:59.957976  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:00.457722  398903 type.go:168] "Request Body" body=""
	I1212 20:32:00.457811  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:00.458158  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:00.958017  398903 type.go:168] "Request Body" body=""
	I1212 20:32:00.958101  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:00.958428  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:01.458294  398903 type.go:168] "Request Body" body=""
	I1212 20:32:01.458366  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:01.458700  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:01.458759  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:01.958578  398903 type.go:168] "Request Body" body=""
	I1212 20:32:01.958660  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:01.959010  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:02.457649  398903 type.go:168] "Request Body" body=""
	I1212 20:32:02.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:02.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:02.957664  398903 type.go:168] "Request Body" body=""
	I1212 20:32:02.957736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:02.958135  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:03.457649  398903 type.go:168] "Request Body" body=""
	I1212 20:32:03.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:03.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:03.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:32:03.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:03.958067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:03.958124  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:04.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:32:04.457689  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:04.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:04.957738  398903 type.go:168] "Request Body" body=""
	I1212 20:32:04.957816  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:04.958159  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:05.457846  398903 type.go:168] "Request Body" body=""
	I1212 20:32:05.457928  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:05.458292  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:05.958124  398903 type.go:168] "Request Body" body=""
	I1212 20:32:05.958202  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:05.958466  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:05.958511  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:06.458381  398903 type.go:168] "Request Body" body=""
	I1212 20:32:06.458469  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:06.458820  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:06.957560  398903 type.go:168] "Request Body" body=""
	I1212 20:32:06.957684  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:06.958040  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:07.457550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:07.457620  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:07.457897  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:07.957602  398903 type.go:168] "Request Body" body=""
	I1212 20:32:07.957684  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:07.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:08.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:32:08.457680  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:08.458006  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:08.458064  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:08.958540  398903 type.go:168] "Request Body" body=""
	I1212 20:32:08.958617  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:08.958908  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:09.457585  398903 type.go:168] "Request Body" body=""
	I1212 20:32:09.457660  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:09.458015  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:09.957606  398903 type.go:168] "Request Body" body=""
	I1212 20:32:09.957683  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:09.958016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:10.457589  398903 type.go:168] "Request Body" body=""
	I1212 20:32:10.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:10.457990  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:10.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:32:10.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:10.958058  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:10.958119  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:11.458077  398903 type.go:168] "Request Body" body=""
	I1212 20:32:11.458157  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:11.458482  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:11.958236  398903 type.go:168] "Request Body" body=""
	I1212 20:32:11.958308  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:11.958586  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:12.458420  398903 type.go:168] "Request Body" body=""
	I1212 20:32:12.458497  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:12.458856  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:12.957555  398903 type.go:168] "Request Body" body=""
	I1212 20:32:12.957638  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:12.957981  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:13.460759  398903 type.go:168] "Request Body" body=""
	I1212 20:32:13.460830  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:13.461068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:13.461109  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:13.957766  398903 type.go:168] "Request Body" body=""
	I1212 20:32:13.957849  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:13.958216  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:14.457793  398903 type.go:168] "Request Body" body=""
	I1212 20:32:14.457868  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:14.458208  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:14.957890  398903 type.go:168] "Request Body" body=""
	I1212 20:32:14.957960  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:14.958230  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:15.457650  398903 type.go:168] "Request Body" body=""
	I1212 20:32:15.457735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:15.458122  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:15.957907  398903 type.go:168] "Request Body" body=""
	I1212 20:32:15.957985  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:15.958378  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:15.958434  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:16.458157  398903 type.go:168] "Request Body" body=""
	I1212 20:32:16.458233  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:16.458504  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:16.958300  398903 type.go:168] "Request Body" body=""
	I1212 20:32:16.958386  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:16.958758  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:17.458562  398903 type.go:168] "Request Body" body=""
	I1212 20:32:17.458639  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:17.458986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:17.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:32:17.957715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:17.958109  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:18.457646  398903 type.go:168] "Request Body" body=""
	I1212 20:32:18.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:18.458061  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:18.458116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:18.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:32:18.957731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:18.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:19.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:32:19.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:19.457938  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:19.957698  398903 type.go:168] "Request Body" body=""
	I1212 20:32:19.957777  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:19.958136  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:20.457625  398903 type.go:168] "Request Body" body=""
	I1212 20:32:20.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:20.458047  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:20.957741  398903 type.go:168] "Request Body" body=""
	I1212 20:32:20.957811  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:20.958082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:20.958125  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:21.458048  398903 type.go:168] "Request Body" body=""
	I1212 20:32:21.458126  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:21.458473  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:21.958279  398903 type.go:168] "Request Body" body=""
	I1212 20:32:21.958354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:21.958679  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:22.458411  398903 type.go:168] "Request Body" body=""
	I1212 20:32:22.458484  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:22.458765  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:22.958550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:22.958632  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:22.958958  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:22.959017  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:23.457629  398903 type.go:168] "Request Body" body=""
	I1212 20:32:23.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:23.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:23.957725  398903 type.go:168] "Request Body" body=""
	I1212 20:32:23.957800  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:23.958134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:24.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:32:24.457721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:24.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:24.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:32:24.957716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:24.958081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:25.457630  398903 type.go:168] "Request Body" body=""
	I1212 20:32:25.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:25.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:25.458090  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:25.958111  398903 type.go:168] "Request Body" body=""
	I1212 20:32:25.958187  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:25.958536  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:26.458306  398903 type.go:168] "Request Body" body=""
	I1212 20:32:26.458383  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:26.458747  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:26.958505  398903 type.go:168] "Request Body" body=""
	I1212 20:32:26.958576  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:26.958841  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:27.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:32:27.457680  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:27.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:27.458127  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:27.957787  398903 type.go:168] "Request Body" body=""
	I1212 20:32:27.957874  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:27.958233  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:28.457931  398903 type.go:168] "Request Body" body=""
	I1212 20:32:28.457998  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:28.458263  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:28.957554  398903 type.go:168] "Request Body" body=""
	I1212 20:32:28.957643  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:28.957977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:29.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:32:29.457711  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:29.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:29.957530  398903 type.go:168] "Request Body" body=""
	I1212 20:32:29.957610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:29.957906  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:29.957953  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:30.457609  398903 type.go:168] "Request Body" body=""
	I1212 20:32:30.457697  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:30.458040  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:30.957778  398903 type.go:168] "Request Body" body=""
	I1212 20:32:30.957864  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:30.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:31.458073  398903 type.go:168] "Request Body" body=""
	I1212 20:32:31.458140  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:31.458418  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:31.958203  398903 type.go:168] "Request Body" body=""
	I1212 20:32:31.958278  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:31.958617  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:31.958671  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:32.458448  398903 type.go:168] "Request Body" body=""
	I1212 20:32:32.458537  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:32.458868  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:32.957533  398903 type.go:168] "Request Body" body=""
	I1212 20:32:32.957609  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:32.957933  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:33.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:32:33.457708  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:33.458036  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:33.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:32:33.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:33.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:34.457588  398903 type.go:168] "Request Body" body=""
	I1212 20:32:34.457663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:34.457997  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:34.458054  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:34.957694  398903 type.go:168] "Request Body" body=""
	I1212 20:32:34.957770  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:34.958112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:35.457630  398903 type.go:168] "Request Body" body=""
	I1212 20:32:35.457708  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:35.458060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:35.957756  398903 type.go:168] "Request Body" body=""
	I1212 20:32:35.957825  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:35.958163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:36.458166  398903 type.go:168] "Request Body" body=""
	I1212 20:32:36.458243  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:36.458598  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:36.458654  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:36.958444  398903 type.go:168] "Request Body" body=""
	I1212 20:32:36.958533  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:36.958889  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:37.458453  398903 type.go:168] "Request Body" body=""
	I1212 20:32:37.458552  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:37.458884  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:37.957603  398903 type.go:168] "Request Body" body=""
	I1212 20:32:37.957686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:37.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:38.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:32:38.457739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:38.458072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:38.957536  398903 type.go:168] "Request Body" body=""
	I1212 20:32:38.957609  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:38.957905  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:38.957951  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:39.457634  398903 type.go:168] "Request Body" body=""
	I1212 20:32:39.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:39.458054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:39.957793  398903 type.go:168] "Request Body" body=""
	I1212 20:32:39.957878  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:39.958188  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:40.458558  398903 type.go:168] "Request Body" body=""
	I1212 20:32:40.458626  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:40.458896  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:40.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:32:40.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:40.958066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:40.958120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:41.457917  398903 type.go:168] "Request Body" body=""
	I1212 20:32:41.458003  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:41.458345  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:41.958008  398903 type.go:168] "Request Body" body=""
	I1212 20:32:41.958090  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:41.958391  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:42.458186  398903 type.go:168] "Request Body" body=""
	I1212 20:32:42.458268  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:42.458645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:42.958471  398903 type.go:168] "Request Body" body=""
	I1212 20:32:42.958551  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:42.958913  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:42.958969  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:43.457567  398903 type.go:168] "Request Body" body=""
	I1212 20:32:43.457639  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:43.457970  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:43.957654  398903 type.go:168] "Request Body" body=""
	I1212 20:32:43.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:43.958127  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:44.457848  398903 type.go:168] "Request Body" body=""
	I1212 20:32:44.457925  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:44.458300  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:44.957921  398903 type.go:168] "Request Body" body=""
	I1212 20:32:44.957989  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:44.958269  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:45.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:32:45.457750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:45.458108  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:45.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:45.957919  398903 type.go:168] "Request Body" body=""
	I1212 20:32:45.958010  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:45.958428  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:46.458249  398903 type.go:168] "Request Body" body=""
	I1212 20:32:46.458344  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:46.458620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:46.958392  398903 type.go:168] "Request Body" body=""
	I1212 20:32:46.958479  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:46.958844  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:47.457550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:47.457637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:47.457976  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:47.957652  398903 type.go:168] "Request Body" body=""
	I1212 20:32:47.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:47.957996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:47.958035  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:48.457660  398903 type.go:168] "Request Body" body=""
	I1212 20:32:48.457733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:48.458085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:48.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:32:48.957717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:48.958068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:49.457759  398903 type.go:168] "Request Body" body=""
	I1212 20:32:49.457832  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:49.458095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:49.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:32:49.957718  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:49.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:49.958116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:50.457791  398903 type.go:168] "Request Body" body=""
	I1212 20:32:50.457875  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:50.458204  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:50.957582  398903 type.go:168] "Request Body" body=""
	I1212 20:32:50.957654  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:50.957961  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:51.457942  398903 type.go:168] "Request Body" body=""
	I1212 20:32:51.458024  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:51.458587  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:51.958377  398903 type.go:168] "Request Body" body=""
	I1212 20:32:51.958463  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:51.958946  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:51.959008  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:52.457596  398903 type.go:168] "Request Body" body=""
	I1212 20:32:52.457667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:52.457937  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:52.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:32:52.957732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:52.958048  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:53.457745  398903 type.go:168] "Request Body" body=""
	I1212 20:32:53.457818  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:53.458155  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:53.958157  398903 type.go:168] "Request Body" body=""
	I1212 20:32:53.958227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:53.958497  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:54.458351  398903 type.go:168] "Request Body" body=""
	I1212 20:32:54.458435  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:54.458785  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:54.458844  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:54.957837  398903 type.go:168] "Request Body" body=""
	I1212 20:32:54.957927  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:54.958377  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:55.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:32:55.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:55.458049  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:55.958082  398903 type.go:168] "Request Body" body=""
	I1212 20:32:55.958157  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:55.958506  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:56.458323  398903 type.go:168] "Request Body" body=""
	I1212 20:32:56.458423  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:56.458789  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:56.958570  398903 type.go:168] "Request Body" body=""
	I1212 20:32:56.958641  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:56.958907  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:56.958949  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:57.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:32:57.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:57.458009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:57.957647  398903 type.go:168] "Request Body" body=""
	I1212 20:32:57.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:57.958085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:58.457771  398903 type.go:168] "Request Body" body=""
	I1212 20:32:58.457845  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:58.458182  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:58.957910  398903 type.go:168] "Request Body" body=""
	I1212 20:32:58.957990  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:58.958333  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:59.458167  398903 type.go:168] "Request Body" body=""
	I1212 20:32:59.458246  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:59.458600  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:59.458673  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:59.958419  398903 type.go:168] "Request Body" body=""
	I1212 20:32:59.958492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:59.958763  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:00.458626  398903 type.go:168] "Request Body" body=""
	I1212 20:33:00.458718  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:00.459178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:00.957917  398903 type.go:168] "Request Body" body=""
	I1212 20:33:00.957999  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:00.958339  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:01.458146  398903 type.go:168] "Request Body" body=""
	I1212 20:33:01.458227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:01.458496  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:01.958247  398903 type.go:168] "Request Body" body=""
	I1212 20:33:01.958324  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:01.958679  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:01.958746  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:02.458517  398903 type.go:168] "Request Body" body=""
	I1212 20:33:02.458595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:02.458922  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:02.957588  398903 type.go:168] "Request Body" body=""
	I1212 20:33:02.957664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:02.957961  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:03.457658  398903 type.go:168] "Request Body" body=""
	I1212 20:33:03.457735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:03.458091  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:03.957689  398903 type.go:168] "Request Body" body=""
	I1212 20:33:03.957766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:03.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:04.457590  398903 type.go:168] "Request Body" body=""
	I1212 20:33:04.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:04.458004  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:04.458057  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:04.957694  398903 type.go:168] "Request Body" body=""
	I1212 20:33:04.957771  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:04.958097  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:05.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:33:05.457724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:05.458077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:05.957795  398903 type.go:168] "Request Body" body=""
	I1212 20:33:05.957876  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:05.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:06.458126  398903 type.go:168] "Request Body" body=""
	I1212 20:33:06.458201  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:06.458609  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:06.458666  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:06.958431  398903 type.go:168] "Request Body" body=""
	I1212 20:33:06.958510  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:06.958861  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:07.458432  398903 type.go:168] "Request Body" body=""
	I1212 20:33:07.458505  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:07.458769  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:07.958549  398903 type.go:168] "Request Body" body=""
	I1212 20:33:07.958631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:07.958975  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:08.457668  398903 type.go:168] "Request Body" body=""
	I1212 20:33:08.457744  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:08.458100  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:08.957714  398903 type.go:168] "Request Body" body=""
	I1212 20:33:08.957786  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:08.958051  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:08.958096  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:09.457741  398903 type.go:168] "Request Body" body=""
	I1212 20:33:09.457817  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:09.458145  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:09.957623  398903 type.go:168] "Request Body" body=""
	I1212 20:33:09.957707  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:09.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:10.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:33:10.457729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:10.458029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:10.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:33:10.957729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:10.958065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:10.958120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:11.457959  398903 type.go:168] "Request Body" body=""
	I1212 20:33:11.458036  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:11.458394  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:11.958170  398903 type.go:168] "Request Body" body=""
	I1212 20:33:11.958258  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:11.958549  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:12.458358  398903 type.go:168] "Request Body" body=""
	I1212 20:33:12.458435  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:12.458775  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:12.957520  398903 type.go:168] "Request Body" body=""
	I1212 20:33:12.957604  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:12.957972  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:13.458501  398903 type.go:168] "Request Body" body=""
	I1212 20:33:13.458572  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:13.458848  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:13.458891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:13.957574  398903 type.go:168] "Request Body" body=""
	I1212 20:33:13.957653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:13.957991  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:14.457577  398903 type.go:168] "Request Body" body=""
	I1212 20:33:14.457656  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:14.457996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:14.957521  398903 type.go:168] "Request Body" body=""
	I1212 20:33:14.957595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:14.957928  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:15.457515  398903 type.go:168] "Request Body" body=""
	I1212 20:33:15.457593  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:15.457969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:15.957742  398903 type.go:168] "Request Body" body=""
	I1212 20:33:15.957819  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:15.958159  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:15.958212  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:16.457912  398903 type.go:168] "Request Body" body=""
	I1212 20:33:16.457988  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:16.458249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:16.957938  398903 type.go:168] "Request Body" body=""
	I1212 20:33:16.958013  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:16.958371  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:17.457903  398903 type.go:168] "Request Body" body=""
	I1212 20:33:17.457988  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:17.458356  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:17.957551  398903 type.go:168] "Request Body" body=""
	I1212 20:33:17.957628  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:17.957895  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:18.457585  398903 type.go:168] "Request Body" body=""
	I1212 20:33:18.457663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:18.458004  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:18.458060  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:18.957651  398903 type.go:168] "Request Body" body=""
	I1212 20:33:18.957727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:18.958085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:19.457757  398903 type.go:168] "Request Body" body=""
	I1212 20:33:19.457827  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:19.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:19.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:33:19.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:19.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:20.457628  398903 type.go:168] "Request Body" body=""
	I1212 20:33:20.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:20.458050  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:20.458103  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:20.957580  398903 type.go:168] "Request Body" body=""
	I1212 20:33:20.957651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:20.957981  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:21.457718  398903 type.go:168] "Request Body" body=""
	I1212 20:33:21.457793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:21.458138  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:21.957850  398903 type.go:168] "Request Body" body=""
	I1212 20:33:21.957933  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:21.958282  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:22.457957  398903 type.go:168] "Request Body" body=""
	I1212 20:33:22.458031  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:22.458362  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:22.458419  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:22.958162  398903 type.go:168] "Request Body" body=""
	I1212 20:33:22.958237  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:22.958574  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:23.458385  398903 type.go:168] "Request Body" body=""
	I1212 20:33:23.458462  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:23.458816  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:23.958452  398903 type.go:168] "Request Body" body=""
	I1212 20:33:23.958525  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:23.958802  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:24.458538  398903 type.go:168] "Request Body" body=""
	I1212 20:33:24.458623  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:24.458972  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:24.459028  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:24.957567  398903 type.go:168] "Request Body" body=""
	I1212 20:33:24.957643  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:24.957987  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:25.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:33:25.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:25.458002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:25.957886  398903 type.go:168] "Request Body" body=""
	I1212 20:33:25.957967  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:25.958322  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:26.458268  398903 type.go:168] "Request Body" body=""
	I1212 20:33:26.458344  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:26.458704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:26.958389  398903 type.go:168] "Request Body" body=""
	I1212 20:33:26.958460  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:26.958721  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:26.958761  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:27.458544  398903 type.go:168] "Request Body" body=""
	I1212 20:33:27.458621  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:27.458969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:27.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:33:27.957682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:27.958006  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:28.457568  398903 type.go:168] "Request Body" body=""
	I1212 20:33:28.457642  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:28.457915  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:28.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:33:28.957711  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:28.958067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:29.457799  398903 type.go:168] "Request Body" body=""
	I1212 20:33:29.457877  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:29.458218  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:29.458292  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:29.957566  398903 type.go:168] "Request Body" body=""
	I1212 20:33:29.957640  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:29.957986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:30.457705  398903 type.go:168] "Request Body" body=""
	I1212 20:33:30.457788  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:30.458134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:30.957840  398903 type.go:168] "Request Body" body=""
	I1212 20:33:30.957922  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:30.958258  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:31.458070  398903 type.go:168] "Request Body" body=""
	I1212 20:33:31.458149  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:31.458407  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:31.458480  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:31.958244  398903 type.go:168] "Request Body" body=""
	I1212 20:33:31.958322  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:31.958670  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:32.458475  398903 type.go:168] "Request Body" body=""
	I1212 20:33:32.458555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:32.458902  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:32.958470  398903 type.go:168] "Request Body" body=""
	I1212 20:33:32.958550  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:32.958844  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:33.457551  398903 type.go:168] "Request Body" body=""
	I1212 20:33:33.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:33.457948  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:33.957664  398903 type.go:168] "Request Body" body=""
	I1212 20:33:33.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:33.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:33.958117  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:34.457524  398903 type.go:168] "Request Body" body=""
	I1212 20:33:34.457599  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:34.457902  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:34.957627  398903 type.go:168] "Request Body" body=""
	I1212 20:33:34.957704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:34.958079  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:35.457784  398903 type.go:168] "Request Body" body=""
	I1212 20:33:35.457914  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:35.458250  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:35.958142  398903 type.go:168] "Request Body" body=""
	I1212 20:33:35.958225  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:35.958508  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:35.958562  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:36.458394  398903 type.go:168] "Request Body" body=""
	I1212 20:33:36.458478  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:36.458822  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:36.957589  398903 type.go:168] "Request Body" body=""
	I1212 20:33:36.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:36.958009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:37.457586  398903 type.go:168] "Request Body" body=""
	I1212 20:33:37.457669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:37.458096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:37.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:33:37.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:37.958113  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:38.457820  398903 type.go:168] "Request Body" body=""
	I1212 20:33:38.457902  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:38.458236  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:38.458295  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:38.957610  398903 type.go:168] "Request Body" body=""
	I1212 20:33:38.957699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:38.958001  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:39.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:33:39.457722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:39.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:39.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:33:39.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:39.958083  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:40.457768  398903 type.go:168] "Request Body" body=""
	I1212 20:33:40.457840  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:40.458168  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:40.957672  398903 type.go:168] "Request Body" body=""
	I1212 20:33:40.957758  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:40.958165  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:40.958231  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:41.458222  398903 type.go:168] "Request Body" body=""
	I1212 20:33:41.458298  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:41.458630  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:41.958341  398903 type.go:168] "Request Body" body=""
	I1212 20:33:41.958427  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:41.958700  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:42.458517  398903 type.go:168] "Request Body" body=""
	I1212 20:33:42.458591  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:42.458943  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:42.957649  398903 type.go:168] "Request Body" body=""
	I1212 20:33:42.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:42.958066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:43.457746  398903 type.go:168] "Request Body" body=""
	I1212 20:33:43.457813  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:43.458089  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:43.458129  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:43.957791  398903 type.go:168] "Request Body" body=""
	I1212 20:33:43.957883  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:43.958248  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:44.457980  398903 type.go:168] "Request Body" body=""
	I1212 20:33:44.458055  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:44.458393  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:44.958151  398903 type.go:168] "Request Body" body=""
	I1212 20:33:44.958223  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:44.958490  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:45.458269  398903 type.go:168] "Request Body" body=""
	I1212 20:33:45.458343  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:45.458708  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:45.458764  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:45.958513  398903 type.go:168] "Request Body" body=""
	I1212 20:33:45.958590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:45.958931  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:46.457565  398903 type.go:168] "Request Body" body=""
	I1212 20:33:46.457633  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:46.457910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:46.957631  398903 type.go:168] "Request Body" body=""
	I1212 20:33:46.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:46.958128  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:47.457846  398903 type.go:168] "Request Body" body=""
	I1212 20:33:47.457922  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:47.458245  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:47.957545  398903 type.go:168] "Request Body" body=""
	I1212 20:33:47.957618  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:47.957914  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:47.957963  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:48.457643  398903 type.go:168] "Request Body" body=""
	I1212 20:33:48.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:48.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:48.957629  398903 type.go:168] "Request Body" body=""
	I1212 20:33:48.957712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:48.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:49.457729  398903 type.go:168] "Request Body" body=""
	I1212 20:33:49.457799  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:49.458103  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:49.957633  398903 type.go:168] "Request Body" body=""
	I1212 20:33:49.957725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:49.958056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:49.958114  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:50.457640  398903 type.go:168] "Request Body" body=""
	I1212 20:33:50.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:50.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:50.957791  398903 type.go:168] "Request Body" body=""
	I1212 20:33:50.957864  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:50.958188  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:51.458156  398903 type.go:168] "Request Body" body=""
	I1212 20:33:51.458244  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:51.458588  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:51.958381  398903 type.go:168] "Request Body" body=""
	I1212 20:33:51.958464  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:51.958840  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:51.958897  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:52.458422  398903 type.go:168] "Request Body" body=""
	I1212 20:33:52.458495  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:52.458781  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:52.958521  398903 type.go:168] "Request Body" body=""
	I1212 20:33:52.958596  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:52.958935  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:53.457563  398903 type.go:168] "Request Body" body=""
	I1212 20:33:53.457641  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:53.457994  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:53.957675  398903 type.go:168] "Request Body" body=""
	I1212 20:33:53.957749  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:53.958046  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:54.457737  398903 type.go:168] "Request Body" body=""
	I1212 20:33:54.457815  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:54.458164  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:54.458229  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:54.957758  398903 type.go:168] "Request Body" body=""
	I1212 20:33:54.957838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:54.958212  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:55.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:33:55.457673  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:55.458019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:55.958073  398903 type.go:168] "Request Body" body=""
	I1212 20:33:55.958151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:55.958481  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:56.458356  398903 type.go:168] "Request Body" body=""
	I1212 20:33:56.458518  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:56.458867  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:56.458919  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:56.958475  398903 type.go:168] "Request Body" body=""
	I1212 20:33:56.958546  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:56.958806  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:57.457573  398903 type.go:168] "Request Body" body=""
	I1212 20:33:57.457662  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:57.458019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:57.957708  398903 type.go:168] "Request Body" body=""
	I1212 20:33:57.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:57.958149  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:58.457519  398903 type.go:168] "Request Body" body=""
	I1212 20:33:58.457596  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:58.457910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:58.957618  398903 type.go:168] "Request Body" body=""
	I1212 20:33:58.957702  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:58.958029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:58.958086  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:59.457639  398903 type.go:168] "Request Body" body=""
	I1212 20:33:59.457717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:59.458079  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:59.957625  398903 type.go:168] "Request Body" body=""
	I1212 20:33:59.957695  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:59.958025  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:00.457684  398903 type.go:168] "Request Body" body=""
	I1212 20:34:00.457770  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:00.458220  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:00.957723  398903 type.go:168] "Request Body" body=""
	I1212 20:34:00.957815  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:00.958152  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:00.958209  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:01.458053  398903 type.go:168] "Request Body" body=""
	I1212 20:34:01.458124  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:01.458397  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:01.958241  398903 type.go:168] "Request Body" body=""
	I1212 20:34:01.958318  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:01.958645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:02.458431  398903 type.go:168] "Request Body" body=""
	I1212 20:34:02.458517  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:02.458903  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:02.958515  398903 type.go:168] "Request Body" body=""
	I1212 20:34:02.958593  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:02.958871  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:02.958913  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:03.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:34:03.457665  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:03.458014  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:03.957750  398903 type.go:168] "Request Body" body=""
	I1212 20:34:03.957834  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:03.958178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:04.457755  398903 type.go:168] "Request Body" body=""
	I1212 20:34:04.457832  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:04.458106  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:04.957792  398903 type.go:168] "Request Body" body=""
	I1212 20:34:04.957872  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:04.958222  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:05.457932  398903 type.go:168] "Request Body" body=""
	I1212 20:34:05.458011  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:05.458316  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:05.458363  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:05.958224  398903 type.go:168] "Request Body" body=""
	I1212 20:34:05.958347  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:05.958674  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:06.457554  398903 type.go:168] "Request Body" body=""
	I1212 20:34:06.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:06.457980  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:06.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:06.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:06.958087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:07.457764  398903 type.go:168] "Request Body" body=""
	I1212 20:34:07.457837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:07.458126  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:07.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:34:07.957717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:07.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:07.958131  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:08.457790  398903 type.go:168] "Request Body" body=""
	I1212 20:34:08.457867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:08.458190  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:08.957583  398903 type.go:168] "Request Body" body=""
	I1212 20:34:08.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:08.958018  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:09.457609  398903 type.go:168] "Request Body" body=""
	I1212 20:34:09.457690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:09.457986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:09.957661  398903 type.go:168] "Request Body" body=""
	I1212 20:34:09.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:09.958082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:10.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:34:10.457682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:10.458044  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:10.458120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:10.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:34:10.957716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:10.958069  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:11.457925  398903 type.go:168] "Request Body" body=""
	I1212 20:34:11.458005  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:11.458337  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:11.957904  398903 type.go:168] "Request Body" body=""
	I1212 20:34:11.957987  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:11.958273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:12.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:34:12.457716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:12.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:12.957766  398903 type.go:168] "Request Body" body=""
	I1212 20:34:12.957844  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:12.958153  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:12.958206  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:13.457572  398903 type.go:168] "Request Body" body=""
	I1212 20:34:13.457652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:13.457977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:13.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:34:13.957752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:13.958163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:14.457645  398903 type.go:168] "Request Body" body=""
	I1212 20:34:14.457721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:14.458033  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:14.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:34:14.957669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:14.957980  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:15.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:34:15.457800  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:15.458149  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:15.458206  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:15.957907  398903 type.go:168] "Request Body" body=""
	I1212 20:34:15.958010  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:15.958356  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:16.458302  398903 type.go:168] "Request Body" body=""
	I1212 20:34:16.458374  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:16.458653  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:16.958451  398903 type.go:168] "Request Body" body=""
	I1212 20:34:16.958529  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:16.958870  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:17.457647  398903 type.go:168] "Request Body" body=""
	I1212 20:34:17.457741  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:17.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:17.957571  398903 type.go:168] "Request Body" body=""
	I1212 20:34:17.957648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:17.958005  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:17.958058  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:18.457731  398903 type.go:168] "Request Body" body=""
	I1212 20:34:18.457820  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:18.458202  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:18.957933  398903 type.go:168] "Request Body" body=""
	I1212 20:34:18.958011  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:18.958346  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:19.457582  398903 type.go:168] "Request Body" body=""
	I1212 20:34:19.457658  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:19.457973  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:19.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:34:19.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:19.958037  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:19.958084  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:20.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:34:20.457726  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:20.458052  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:20.957756  398903 type.go:168] "Request Body" body=""
	I1212 20:34:20.957830  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:20.958096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:21.458059  398903 type.go:168] "Request Body" body=""
	I1212 20:34:21.458132  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:21.458454  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:21.958169  398903 type.go:168] "Request Body" body=""
	I1212 20:34:21.958248  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:21.958614  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:21.958670  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:22.458387  398903 type.go:168] "Request Body" body=""
	I1212 20:34:22.458456  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:22.458712  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:22.958495  398903 type.go:168] "Request Body" body=""
	I1212 20:34:22.958574  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:22.958894  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:23.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:34:23.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:23.458042  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:23.957581  398903 type.go:168] "Request Body" body=""
	I1212 20:34:23.957653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:23.957931  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:24.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:34:24.457766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:24.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:24.458117  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:24.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:24.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:24.958072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:25.457596  398903 type.go:168] "Request Body" body=""
	I1212 20:34:25.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:25.458023  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:25.958032  398903 type.go:168] "Request Body" body=""
	I1212 20:34:25.958118  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:25.958454  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:26.458388  398903 type.go:168] "Request Body" body=""
	I1212 20:34:26.458463  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:26.458824  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:26.458879  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:26.958476  398903 type.go:168] "Request Body" body=""
	I1212 20:34:26.958547  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:26.958814  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:27.458579  398903 type.go:168] "Request Body" body=""
	I1212 20:34:27.458656  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:27.458987  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:27.957727  398903 type.go:168] "Request Body" body=""
	I1212 20:34:27.957802  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:27.958162  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:28.458439  398903 type.go:168] "Request Body" body=""
	I1212 20:34:28.458510  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:28.458774  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:28.958512  398903 type.go:168] "Request Body" body=""
	I1212 20:34:28.958589  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:28.958911  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:28.958974  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:29.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:34:29.457686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:29.458020  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:29.957734  398903 type.go:168] "Request Body" body=""
	I1212 20:34:29.957825  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:29.958161  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:30.457641  398903 type.go:168] "Request Body" body=""
	I1212 20:34:30.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:30.458083  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:30.957610  398903 type.go:168] "Request Body" body=""
	I1212 20:34:30.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:30.958024  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:31.457903  398903 type.go:168] "Request Body" body=""
	I1212 20:34:31.458012  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:31.458336  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:31.458388  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:31.958144  398903 type.go:168] "Request Body" body=""
	I1212 20:34:31.958227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:31.958581  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:32.458466  398903 type.go:168] "Request Body" body=""
	I1212 20:34:32.458569  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:32.458930  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:32.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:34:32.957651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:32.957985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:33.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:34:33.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:33.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:33.957814  398903 type.go:168] "Request Body" body=""
	I1212 20:34:33.957889  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:33.958221  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:33.958279  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:34.457576  398903 type.go:168] "Request Body" body=""
	I1212 20:34:34.457651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:34.457968  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:34.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:34:34.957724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:34.958077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:35.457792  398903 type.go:168] "Request Body" body=""
	I1212 20:34:35.457876  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:35.458181  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:35.958034  398903 type.go:168] "Request Body" body=""
	I1212 20:34:35.958104  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:35.958369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:35.958411  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:36.458355  398903 type.go:168] "Request Body" body=""
	I1212 20:34:36.458432  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:36.458815  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:36.957543  398903 type.go:168] "Request Body" body=""
	I1212 20:34:36.957626  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:36.957947  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:37.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:34:37.457678  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:37.457995  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:37.957635  398903 type.go:168] "Request Body" body=""
	I1212 20:34:37.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:37.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:38.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:34:38.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:38.458116  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:38.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:38.957684  398903 type.go:168] "Request Body" body=""
	I1212 20:34:38.957762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:38.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:39.457740  398903 type.go:168] "Request Body" body=""
	I1212 20:34:39.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:39.458189  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:39.957892  398903 type.go:168] "Request Body" body=""
	I1212 20:34:39.957975  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:39.958305  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:40.457581  398903 type.go:168] "Request Body" body=""
	I1212 20:34:40.457659  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:40.457974  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:40.957654  398903 type.go:168] "Request Body" body=""
	I1212 20:34:40.957727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:40.958080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:40.958134  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:41.457945  398903 type.go:168] "Request Body" body=""
	I1212 20:34:41.458029  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:41.458375  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:41.958149  398903 type.go:168] "Request Body" body=""
	I1212 20:34:41.958218  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:41.958489  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:42.458344  398903 type.go:168] "Request Body" body=""
	I1212 20:34:42.458423  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:42.458797  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:42.957548  398903 type.go:168] "Request Body" body=""
	I1212 20:34:42.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:42.958002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:43.457680  398903 type.go:168] "Request Body" body=""
	I1212 20:34:43.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:43.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:43.458139  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:43.957634  398903 type.go:168] "Request Body" body=""
	I1212 20:34:43.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:43.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:44.457784  398903 type.go:168] "Request Body" body=""
	I1212 20:34:44.457863  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:44.458214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:44.957493  398903 type.go:168] "Request Body" body=""
	I1212 20:34:44.957567  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:44.957832  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:45.457549  398903 type.go:168] "Request Body" body=""
	I1212 20:34:45.457634  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:45.457985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:45.957790  398903 type.go:168] "Request Body" body=""
	I1212 20:34:45.957867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:45.958220  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:45.958281  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:46.458047  398903 type.go:168] "Request Body" body=""
	I1212 20:34:46.458139  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:46.458408  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:46.958199  398903 type.go:168] "Request Body" body=""
	I1212 20:34:46.958280  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:46.958672  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:47.458502  398903 type.go:168] "Request Body" body=""
	I1212 20:34:47.458578  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:47.458923  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:47.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:34:47.957667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:47.958000  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:48.457673  398903 type.go:168] "Request Body" body=""
	I1212 20:34:48.457766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:48.458114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:48.458163  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:48.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:34:48.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:48.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:49.457750  398903 type.go:168] "Request Body" body=""
	I1212 20:34:49.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:49.458132  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:49.957625  398903 type.go:168] "Request Body" body=""
	I1212 20:34:49.957700  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:49.958065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:50.457775  398903 type.go:168] "Request Body" body=""
	I1212 20:34:50.457853  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:50.458187  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:50.458247  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:50.957570  398903 type.go:168] "Request Body" body=""
	I1212 20:34:50.957642  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:50.957959  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:51.457904  398903 type.go:168] "Request Body" body=""
	I1212 20:34:51.458001  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:51.458321  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:51.957626  398903 type.go:168] "Request Body" body=""
	I1212 20:34:51.957709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:51.958019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:52.457677  398903 type.go:168] "Request Body" body=""
	I1212 20:34:52.457750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:52.458071  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:52.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:52.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:52.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:52.958126  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:53.457793  398903 type.go:168] "Request Body" body=""
	I1212 20:34:53.457868  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:53.458211  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:53.957606  398903 type.go:168] "Request Body" body=""
	I1212 20:34:53.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:53.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:54.457738  398903 type.go:168] "Request Body" body=""
	I1212 20:34:54.457816  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:54.458178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:54.957898  398903 type.go:168] "Request Body" body=""
	I1212 20:34:54.957979  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:54.958335  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:54.958392  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:55.457874  398903 type.go:168] "Request Body" body=""
	I1212 20:34:55.457957  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:55.461901  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:34:55.957753  398903 type.go:168] "Request Body" body=""
	I1212 20:34:55.957835  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:55.958180  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:56.458205  398903 type.go:168] "Request Body" body=""
	I1212 20:34:56.458289  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:56.458646  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:56.958289  398903 type.go:168] "Request Body" body=""
	I1212 20:34:56.958348  398903 node_ready.go:38] duration metric: took 6m0.000942014s for node "functional-261311" to be "Ready" ...
	I1212 20:34:56.961249  398903 out.go:203] 
	W1212 20:34:56.963984  398903 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 20:34:56.964005  398903 out.go:285] * 
	* 
	W1212 20:34:56.966156  398903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:34:56.969023  398903 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-linux-arm64 start -p functional-261311 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m6.47844967s for "functional-261311" cluster.
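The stderr above shows the readiness loop polling GET https://192.168.49.2:8441/api/v1/nodes/functional-261311 roughly every 500ms for the full 6m wait window, with every attempt ending in "connect: connection refused", i.e. the API server never came back after the restart. The commands below are a diagnostic sketch, not part of the test harness; they assume the profile name functional-261311, that crictl is available inside the kicbase node, and the host port mapping (127.0.0.1:33165 -> 8441/tcp) reported by the docker inspect output further down.

	# Probe the endpoint the readiness loop polls (expected to fail the same way).
	curl -sk --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable on the cluster IP"

	# Same probe via the port published on the host loopback interface.
	curl -sk --max-time 5 https://127.0.0.1:33165/healthz || echo "apiserver unreachable via the published port"

	# List control-plane containers inside the node to see whether kube-apiserver/etcd ever started or are crash-looping.
	out/minikube-linux-arm64 ssh -p functional-261311 -- sudo crictl ps -a | grep -E 'kube-apiserver|etcd'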
I1212 20:34:57.632320  364853 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
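The snapshot above shows all three proxy variables empty, which rules out a host proxy intercepting the apiserver probes. A one-line re-check on the host, assuming a POSIX shell, would be:

	# Confirm no proxy variables are set in the environment (case-insensitive match covers HTTP_PROXY and http_proxy).
	env | grep -iE '^(http|https|no)_proxy=' || echo "no proxy variables set"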
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
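Most of the inspect dump above is incidental to this failure; the relevant facts are that the container has been Running since 2025-12-12T20:20:33Z, that 8441/tcp is published on 127.0.0.1:33165, and that the node holds the static address 192.168.49.2 on the functional-261311 network. A small sketch for pulling just those fields with Go templates, assuming only the host docker CLI (this is not part of the test harness):

	# Container state and start time.
	docker inspect -f '{{.State.Status}} {{.State.StartedAt}}' functional-261311

	# Host port that forwards to the apiserver port 8441/tcp inside the node.
	docker inspect -f '{{(index .NetworkSettings.Ports "8441/tcp" 0).HostPort}}' functional-261311

	# Static IP assigned on the per-profile docker network.
	docker inspect -f '{{(index .NetworkSettings.Networks "functional-261311").IPAddress}}' functional-261311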
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (372.028058ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
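Exit status 2 with .Host still reporting Running matches the picture above: the docker container is up while the Kubernetes components inside it are not. A hedged follow-up check, assuming the standard minikube status template fields (Name, Host, Kubelet, APIServer, Kubeconfig):

	# Print every status component rather than only the host state queried by the helper.
	out/minikube-linux-arm64 status -p functional-261311 --format '{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}} kubeconfig={{.Kubeconfig}}'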
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-261311 logs -n 25: (1.039947173s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-205528 image rm kicbase/echo-server:functional-205528 --alsologtostderr                                                                │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                               │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image save --daemon kicbase/echo-server:functional-205528 --alsologtostderr                                                     │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/364853.pem                                                                                          │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /usr/share/ca-certificates/364853.pem                                                                              │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/test/nested/copy/364853/hosts                                                                                 │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/3648532.pem                                                                                         │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /usr/share/ca-certificates/3648532.pem                                                                             │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format short --alsologtostderr                                                                                       │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format yaml --alsologtostderr                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh pgrep buildkitd                                                                                                             │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ image          │ functional-205528 image ls --format json --alsologtostderr                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr                                            │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format table --alsologtostderr                                                                                       │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ delete         │ -p functional-205528                                                                                                                              │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ start          │ -p functional-261311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ start          │ -p functional-261311 --alsologtostderr -v=8                                                                                                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:28 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:28:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:28:51.200639  398903 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:28:51.200813  398903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:28:51.200825  398903 out.go:374] Setting ErrFile to fd 2...
	I1212 20:28:51.200844  398903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:28:51.201121  398903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:28:51.201526  398903 out.go:368] Setting JSON to false
	I1212 20:28:51.202423  398903 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11484,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:28:51.202499  398903 start.go:143] virtualization:  
	I1212 20:28:51.205894  398903 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:28:51.209621  398903 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:28:51.209743  398903 notify.go:221] Checking for updates...
	I1212 20:28:51.215382  398903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:28:51.218267  398903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:51.221168  398903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:28:51.224043  398903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:28:51.227018  398903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:28:51.230467  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:51.230581  398903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:28:51.269738  398903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:28:51.269857  398903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:28:51.341809  398903 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:28:51.330621143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:28:51.341929  398903 docker.go:319] overlay module found
	I1212 20:28:51.347026  398903 out.go:179] * Using the docker driver based on existing profile
	I1212 20:28:51.349898  398903 start.go:309] selected driver: docker
	I1212 20:28:51.349928  398903 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:51.350015  398903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:28:51.350136  398903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:28:51.408041  398903 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:28:51.398420734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:28:51.408534  398903 cni.go:84] Creating CNI manager for ""
	I1212 20:28:51.408600  398903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:28:51.408656  398903 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:51.413511  398903 out.go:179] * Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	I1212 20:28:51.416491  398903 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:28:51.419403  398903 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:28:51.422306  398903 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:28:51.422357  398903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:28:51.422368  398903 cache.go:65] Caching tarball of preloaded images
	I1212 20:28:51.422458  398903 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:28:51.422471  398903 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:28:51.422591  398903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:28:51.422818  398903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:28:51.441630  398903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:28:51.441653  398903 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:28:51.441676  398903 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:28:51.441708  398903 start.go:360] acquireMachinesLock for functional-261311: {Name:mkbc4e6c743e47953e99b8ce65e244d33b483105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:28:51.441778  398903 start.go:364] duration metric: took 45.9µs to acquireMachinesLock for "functional-261311"
	I1212 20:28:51.441803  398903 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:28:51.441812  398903 fix.go:54] fixHost starting: 
	I1212 20:28:51.442073  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:51.469956  398903 fix.go:112] recreateIfNeeded on functional-261311: state=Running err=<nil>
	W1212 20:28:51.469989  398903 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:28:51.473238  398903 out.go:252] * Updating the running docker "functional-261311" container ...
	I1212 20:28:51.473304  398903 machine.go:94] provisionDockerMachine start ...
	I1212 20:28:51.473396  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.494630  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.494961  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.494976  398903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:28:51.648147  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:28:51.648174  398903 ubuntu.go:182] provisioning hostname "functional-261311"
	I1212 20:28:51.648237  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.668778  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.669090  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.669106  398903 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-261311 && echo "functional-261311" | sudo tee /etc/hostname
	I1212 20:28:51.829776  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:28:51.829853  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.848648  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.848971  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.848987  398903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-261311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-261311/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-261311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:28:52.002627  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:28:52.002659  398903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:28:52.002689  398903 ubuntu.go:190] setting up certificates
	I1212 20:28:52.002713  398903 provision.go:84] configureAuth start
	I1212 20:28:52.002795  398903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:28:52.023958  398903 provision.go:143] copyHostCerts
	I1212 20:28:52.024006  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:28:52.024050  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:28:52.024064  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:28:52.024145  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:28:52.024243  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:28:52.024271  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:28:52.024280  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:28:52.024310  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:28:52.024357  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:28:52.024421  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:28:52.024431  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:28:52.024463  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:28:52.024521  398903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.functional-261311 san=[127.0.0.1 192.168.49.2 functional-261311 localhost minikube]
	I1212 20:28:52.567706  398903 provision.go:177] copyRemoteCerts
	I1212 20:28:52.567776  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:28:52.567821  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:52.585858  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:52.692768  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:28:52.692828  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:28:52.711466  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:28:52.711534  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:28:52.730742  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:28:52.730815  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:28:52.749109  398903 provision.go:87] duration metric: took 746.363484ms to configureAuth
	I1212 20:28:52.749138  398903 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:28:52.749373  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:52.749480  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:52.767233  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:52.767548  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:52.767570  398903 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:28:53.124031  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:28:53.124063  398903 machine.go:97] duration metric: took 1.650735569s to provisionDockerMachine
	I1212 20:28:53.124076  398903 start.go:293] postStartSetup for "functional-261311" (driver="docker")
	I1212 20:28:53.124090  398903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:28:53.124184  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:28:53.124249  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.144150  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.248393  398903 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:28:53.251578  398903 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 20:28:53.251600  398903 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 20:28:53.251605  398903 command_runner.go:130] > VERSION_ID="12"
	I1212 20:28:53.251610  398903 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 20:28:53.251614  398903 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 20:28:53.251618  398903 command_runner.go:130] > ID=debian
	I1212 20:28:53.251623  398903 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 20:28:53.251629  398903 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 20:28:53.251634  398903 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 20:28:53.251713  398903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:28:53.251736  398903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:28:53.251748  398903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:28:53.251809  398903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:28:53.251889  398903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:28:53.251900  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:28:53.251976  398903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> hosts in /etc/test/nested/copy/364853
	I1212 20:28:53.251984  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> /etc/test/nested/copy/364853/hosts
	I1212 20:28:53.252026  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364853
	I1212 20:28:53.259320  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:28:53.277130  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts --> /etc/test/nested/copy/364853/hosts (40 bytes)
	I1212 20:28:53.294238  398903 start.go:296] duration metric: took 170.145848ms for postStartSetup
	I1212 20:28:53.294390  398903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:28:53.294470  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.312603  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.412930  398903 command_runner.go:130] > 11%
	I1212 20:28:53.413464  398903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:28:53.417828  398903 command_runner.go:130] > 174G
	I1212 20:28:53.418334  398903 fix.go:56] duration metric: took 1.976518079s for fixHost
	I1212 20:28:53.418383  398903 start.go:83] releasing machines lock for "functional-261311", held for 1.976583573s
	I1212 20:28:53.418465  398903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:28:53.435134  398903 ssh_runner.go:195] Run: cat /version.json
	I1212 20:28:53.435190  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.435445  398903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:28:53.435511  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.452987  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.462005  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.555880  398903 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 20:28:53.556060  398903 ssh_runner.go:195] Run: systemctl --version
	I1212 20:28:53.643428  398903 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 20:28:53.646219  398903 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 20:28:53.646272  398903 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 20:28:53.646362  398903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:28:53.685489  398903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 20:28:53.690919  398903 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 20:28:53.690960  398903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:28:53.691016  398903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:28:53.699790  398903 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:28:53.699851  398903 start.go:496] detecting cgroup driver to use...
	I1212 20:28:53.699883  398903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:28:53.699937  398903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:28:53.716256  398903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:28:53.731380  398903 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:28:53.731442  398903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:28:53.747947  398903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:28:53.763704  398903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:28:53.877723  398903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:28:53.997385  398903 docker.go:234] disabling docker service ...
	I1212 20:28:53.997457  398903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:28:54.016313  398903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:28:54.032112  398903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:28:54.157667  398903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:28:54.273189  398903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:28:54.288211  398903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:28:54.301284  398903 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 20:28:54.302509  398903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:28:54.302613  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.311343  398903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:28:54.311460  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.320776  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.330058  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.340191  398903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:28:54.348326  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.357164  398903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.365464  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.374528  398903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:28:54.381778  398903 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 20:28:54.382795  398903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:28:54.390360  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:54.529224  398903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:28:54.703666  398903 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:28:54.703740  398903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:28:54.707780  398903 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 20:28:54.707808  398903 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 20:28:54.707826  398903 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1212 20:28:54.707834  398903 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:28:54.707840  398903 command_runner.go:130] > Access: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707850  398903 command_runner.go:130] > Modify: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707858  398903 command_runner.go:130] > Change: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707861  398903 command_runner.go:130] >  Birth: -
	I1212 20:28:54.707934  398903 start.go:564] Will wait 60s for crictl version
	I1212 20:28:54.708017  398903 ssh_runner.go:195] Run: which crictl
	I1212 20:28:54.711729  398903 command_runner.go:130] > /usr/local/bin/crictl
	I1212 20:28:54.711909  398903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:28:54.737852  398903 command_runner.go:130] > Version:  0.1.0
	I1212 20:28:54.737888  398903 command_runner.go:130] > RuntimeName:  cri-o
	I1212 20:28:54.737895  398903 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1212 20:28:54.737901  398903 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 20:28:54.740042  398903 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:28:54.740184  398903 ssh_runner.go:195] Run: crio --version
	I1212 20:28:54.769676  398903 command_runner.go:130] > crio version 1.34.3
	I1212 20:28:54.769713  398903 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 20:28:54.769720  398903 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 20:28:54.769725  398903 command_runner.go:130] >    GitTreeState:   dirty
	I1212 20:28:54.769750  398903 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 20:28:54.769764  398903 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 20:28:54.769768  398903 command_runner.go:130] >    Compiler:       gc
	I1212 20:28:54.769788  398903 command_runner.go:130] >    Platform:       linux/arm64
	I1212 20:28:54.769802  398903 command_runner.go:130] >    Linkmode:       static
	I1212 20:28:54.769806  398903 command_runner.go:130] >    BuildTags:
	I1212 20:28:54.769810  398903 command_runner.go:130] >      static
	I1212 20:28:54.769813  398903 command_runner.go:130] >      netgo
	I1212 20:28:54.769832  398903 command_runner.go:130] >      osusergo
	I1212 20:28:54.769838  398903 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 20:28:54.769842  398903 command_runner.go:130] >      seccomp
	I1212 20:28:54.769849  398903 command_runner.go:130] >      apparmor
	I1212 20:28:54.769852  398903 command_runner.go:130] >      selinux
	I1212 20:28:54.769859  398903 command_runner.go:130] >    LDFlags:          unknown
	I1212 20:28:54.769867  398903 command_runner.go:130] >    SeccompEnabled:   true
	I1212 20:28:54.769872  398903 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 20:28:54.769969  398903 ssh_runner.go:195] Run: crio --version
	I1212 20:28:54.796781  398903 command_runner.go:130] > crio version 1.34.3
	I1212 20:28:54.796850  398903 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 20:28:54.796873  398903 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 20:28:54.796896  398903 command_runner.go:130] >    GitTreeState:   dirty
	I1212 20:28:54.796933  398903 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 20:28:54.796961  398903 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 20:28:54.796982  398903 command_runner.go:130] >    Compiler:       gc
	I1212 20:28:54.797005  398903 command_runner.go:130] >    Platform:       linux/arm64
	I1212 20:28:54.797036  398903 command_runner.go:130] >    Linkmode:       static
	I1212 20:28:54.797055  398903 command_runner.go:130] >    BuildTags:
	I1212 20:28:54.797071  398903 command_runner.go:130] >      static
	I1212 20:28:54.797089  398903 command_runner.go:130] >      netgo
	I1212 20:28:54.797108  398903 command_runner.go:130] >      osusergo
	I1212 20:28:54.797151  398903 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 20:28:54.797177  398903 command_runner.go:130] >      seccomp
	I1212 20:28:54.797197  398903 command_runner.go:130] >      apparmor
	I1212 20:28:54.797231  398903 command_runner.go:130] >      selinux
	I1212 20:28:54.797262  398903 command_runner.go:130] >    LDFlags:          unknown
	I1212 20:28:54.797290  398903 command_runner.go:130] >    SeccompEnabled:   true
	I1212 20:28:54.797309  398903 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 20:28:54.804038  398903 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:28:54.806949  398903 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:28:54.823441  398903 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:28:54.827623  398903 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 20:28:54.827865  398903 kubeadm.go:884] updating cluster {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:28:54.827977  398903 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:28:54.828031  398903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:28:54.860175  398903 command_runner.go:130] > {
	I1212 20:28:54.860197  398903 command_runner.go:130] >   "images":  [
	I1212 20:28:54.860201  398903 command_runner.go:130] >     {
	I1212 20:28:54.860214  398903 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 20:28:54.860219  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860225  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 20:28:54.860229  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860233  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860242  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 20:28:54.860250  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 20:28:54.860254  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860258  398903 command_runner.go:130] >       "size":  "111333938",
	I1212 20:28:54.860263  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860270  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860274  398903 command_runner.go:130] >     },
	I1212 20:28:54.860277  398903 command_runner.go:130] >     {
	I1212 20:28:54.860285  398903 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 20:28:54.860289  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860295  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:28:54.860298  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860302  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860310  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 20:28:54.860333  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 20:28:54.860341  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860346  398903 command_runner.go:130] >       "size":  "29037500",
	I1212 20:28:54.860350  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860357  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860360  398903 command_runner.go:130] >     },
	I1212 20:28:54.860363  398903 command_runner.go:130] >     {
	I1212 20:28:54.860391  398903 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 20:28:54.860396  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860401  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 20:28:54.860404  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860408  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860417  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 20:28:54.860425  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 20:28:54.860428  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860434  398903 command_runner.go:130] >       "size":  "74491780",
	I1212 20:28:54.860439  398903 command_runner.go:130] >       "username":  "nonroot",
	I1212 20:28:54.860443  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860447  398903 command_runner.go:130] >     },
	I1212 20:28:54.860456  398903 command_runner.go:130] >     {
	I1212 20:28:54.860463  398903 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 20:28:54.860467  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860472  398903 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 20:28:54.860478  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860482  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860490  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 20:28:54.860497  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 20:28:54.860505  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860510  398903 command_runner.go:130] >       "size":  "60857170",
	I1212 20:28:54.860513  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860517  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860521  398903 command_runner.go:130] >       },
	I1212 20:28:54.860530  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860534  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860540  398903 command_runner.go:130] >     },
	I1212 20:28:54.860546  398903 command_runner.go:130] >     {
	I1212 20:28:54.860552  398903 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 20:28:54.860558  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860564  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 20:28:54.860567  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860577  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860594  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 20:28:54.860603  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 20:28:54.860610  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860614  398903 command_runner.go:130] >       "size":  "84949999",
	I1212 20:28:54.860618  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860622  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860625  398903 command_runner.go:130] >       },
	I1212 20:28:54.860630  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860636  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860639  398903 command_runner.go:130] >     },
	I1212 20:28:54.860643  398903 command_runner.go:130] >     {
	I1212 20:28:54.860652  398903 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 20:28:54.860659  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860665  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 20:28:54.860668  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860672  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860684  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 20:28:54.860695  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 20:28:54.860698  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860702  398903 command_runner.go:130] >       "size":  "72170325",
	I1212 20:28:54.860706  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860711  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860717  398903 command_runner.go:130] >       },
	I1212 20:28:54.860721  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860726  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860739  398903 command_runner.go:130] >     },
	I1212 20:28:54.860747  398903 command_runner.go:130] >     {
	I1212 20:28:54.860754  398903 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 20:28:54.860760  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860766  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 20:28:54.860769  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860773  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860781  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 20:28:54.860792  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 20:28:54.860796  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860801  398903 command_runner.go:130] >       "size":  "74106775",
	I1212 20:28:54.860807  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860811  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860817  398903 command_runner.go:130] >     },
	I1212 20:28:54.860820  398903 command_runner.go:130] >     {
	I1212 20:28:54.860827  398903 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 20:28:54.860831  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860839  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 20:28:54.860844  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860854  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860863  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 20:28:54.860876  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 20:28:54.860883  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860887  398903 command_runner.go:130] >       "size":  "49822549",
	I1212 20:28:54.860891  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860895  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860905  398903 command_runner.go:130] >       },
	I1212 20:28:54.860908  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860912  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860922  398903 command_runner.go:130] >     },
	I1212 20:28:54.860925  398903 command_runner.go:130] >     {
	I1212 20:28:54.860932  398903 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 20:28:54.860938  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860944  398903 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.860948  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860953  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860961  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 20:28:54.860971  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 20:28:54.860975  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860979  398903 command_runner.go:130] >       "size":  "519884",
	I1212 20:28:54.860984  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860991  398903 command_runner.go:130] >         "value":  "65535"
	I1212 20:28:54.860994  398903 command_runner.go:130] >       },
	I1212 20:28:54.861000  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.861004  398903 command_runner.go:130] >       "pinned":  true
	I1212 20:28:54.861014  398903 command_runner.go:130] >     }
	I1212 20:28:54.861017  398903 command_runner.go:130] >   ]
	I1212 20:28:54.861020  398903 command_runner.go:130] > }
	I1212 20:28:54.861204  398903 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:28:54.861218  398903 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:28:54.861275  398903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:28:54.883482  398903 command_runner.go:130] > {
	I1212 20:28:54.883501  398903 command_runner.go:130] >   "images":  [
	I1212 20:28:54.883506  398903 command_runner.go:130] >     {
	I1212 20:28:54.883514  398903 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 20:28:54.883520  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883526  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 20:28:54.883529  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883533  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883547  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 20:28:54.883556  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 20:28:54.883560  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883564  398903 command_runner.go:130] >       "size":  "111333938",
	I1212 20:28:54.883568  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883574  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883577  398903 command_runner.go:130] >     },
	I1212 20:28:54.883580  398903 command_runner.go:130] >     {
	I1212 20:28:54.883587  398903 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 20:28:54.883591  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883597  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:28:54.883600  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883604  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883612  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 20:28:54.883620  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 20:28:54.883624  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883628  398903 command_runner.go:130] >       "size":  "29037500",
	I1212 20:28:54.883632  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883638  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883641  398903 command_runner.go:130] >     },
	I1212 20:28:54.883645  398903 command_runner.go:130] >     {
	I1212 20:28:54.883652  398903 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 20:28:54.883656  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883663  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 20:28:54.883666  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883670  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883679  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 20:28:54.883687  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 20:28:54.883690  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883695  398903 command_runner.go:130] >       "size":  "74491780",
	I1212 20:28:54.883699  398903 command_runner.go:130] >       "username":  "nonroot",
	I1212 20:28:54.883702  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883706  398903 command_runner.go:130] >     },
	I1212 20:28:54.883712  398903 command_runner.go:130] >     {
	I1212 20:28:54.883719  398903 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 20:28:54.883723  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883728  398903 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 20:28:54.883733  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883737  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883745  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 20:28:54.883752  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 20:28:54.883756  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883759  398903 command_runner.go:130] >       "size":  "60857170",
	I1212 20:28:54.883763  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883767  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883770  398903 command_runner.go:130] >       },
	I1212 20:28:54.883778  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883783  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883786  398903 command_runner.go:130] >     },
	I1212 20:28:54.883788  398903 command_runner.go:130] >     {
	I1212 20:28:54.883795  398903 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 20:28:54.883798  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883804  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 20:28:54.883807  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883811  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883819  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 20:28:54.883827  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 20:28:54.883830  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883834  398903 command_runner.go:130] >       "size":  "84949999",
	I1212 20:28:54.883838  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883842  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883845  398903 command_runner.go:130] >       },
	I1212 20:28:54.883854  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883858  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883861  398903 command_runner.go:130] >     },
	I1212 20:28:54.883864  398903 command_runner.go:130] >     {
	I1212 20:28:54.883874  398903 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 20:28:54.883878  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883884  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 20:28:54.883888  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883891  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883899  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 20:28:54.883908  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 20:28:54.883911  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883915  398903 command_runner.go:130] >       "size":  "72170325",
	I1212 20:28:54.883919  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883923  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883926  398903 command_runner.go:130] >       },
	I1212 20:28:54.883930  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883935  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883938  398903 command_runner.go:130] >     },
	I1212 20:28:54.883942  398903 command_runner.go:130] >     {
	I1212 20:28:54.883949  398903 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 20:28:54.883952  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883958  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 20:28:54.883961  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883965  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883973  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 20:28:54.883981  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 20:28:54.883983  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883988  398903 command_runner.go:130] >       "size":  "74106775",
	I1212 20:28:54.883991  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883995  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883999  398903 command_runner.go:130] >     },
	I1212 20:28:54.884002  398903 command_runner.go:130] >     {
	I1212 20:28:54.884008  398903 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 20:28:54.884012  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.884017  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 20:28:54.884020  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884030  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.884038  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 20:28:54.884055  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 20:28:54.884061  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884064  398903 command_runner.go:130] >       "size":  "49822549",
	I1212 20:28:54.884068  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.884072  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.884075  398903 command_runner.go:130] >       },
	I1212 20:28:54.884079  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.884082  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.884085  398903 command_runner.go:130] >     },
	I1212 20:28:54.884088  398903 command_runner.go:130] >     {
	I1212 20:28:54.884095  398903 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 20:28:54.884099  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.884103  398903 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.884106  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884110  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.884118  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 20:28:54.884125  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 20:28:54.884129  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884133  398903 command_runner.go:130] >       "size":  "519884",
	I1212 20:28:54.884137  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.884141  398903 command_runner.go:130] >         "value":  "65535"
	I1212 20:28:54.884145  398903 command_runner.go:130] >       },
	I1212 20:28:54.884149  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.884152  398903 command_runner.go:130] >       "pinned":  true
	I1212 20:28:54.884155  398903 command_runner.go:130] >     }
	I1212 20:28:54.884158  398903 command_runner.go:130] >   ]
	I1212 20:28:54.884161  398903 command_runner.go:130] > }
	I1212 20:28:54.885632  398903 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:28:54.885655  398903 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:28:54.885663  398903 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 20:28:54.885778  398903 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-261311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
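	The "kubelet [Unit] ... ExecStart=..." block above is the systemd drop-in content minikube generates for the node before starting the kubelet. As a minimal, hypothetical sketch of how such a drop-in can be rendered (this is not minikube's template; the type and template names are made up, and the values are taken directly from the log above), text/template is enough:

	    package main

	    import (
	    	"os"
	    	"text/template"
	    )

	    // kubeletOpts holds the per-node values that vary in the logged drop-in.
	    type kubeletOpts struct {
	    	KubernetesVersion string
	    	Hostname          string
	    	NodeIP            string
	    }

	    // Illustrative template in the same shape as the logged unit override.
	    const unitTmpl = `[Unit]
	    Wants=crio.service

	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	    [Install]
	    `

	    func main() {
	    	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	    	opts := kubeletOpts{
	    		KubernetesVersion: "v1.35.0-beta.0",
	    		Hostname:          "functional-261311",
	    		NodeIP:            "192.168.49.2",
	    	}
	    	// Write the rendered drop-in to stdout; on a real node this content
	    	// would be written to a systemd drop-in file and reloaded.
	    	if err := t.Execute(os.Stdout, opts); err != nil {
	    		panic(err)
	    	}
	    }

	The rendered output matches the flags shown in the log (cgroups-per-qos disabled and node-allocatable enforcement emptied, consistent with the cgroupfs cgroup manager reported by the `crio config` dump that follows).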
	I1212 20:28:54.885868  398903 ssh_runner.go:195] Run: crio config
	I1212 20:28:54.934221  398903 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 20:28:54.934247  398903 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 20:28:54.934255  398903 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 20:28:54.934259  398903 command_runner.go:130] > #
	I1212 20:28:54.934288  398903 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 20:28:54.934303  398903 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 20:28:54.934310  398903 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 20:28:54.934320  398903 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 20:28:54.934324  398903 command_runner.go:130] > # reload'.
	I1212 20:28:54.934331  398903 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 20:28:54.934341  398903 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 20:28:54.934347  398903 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 20:28:54.934369  398903 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 20:28:54.934379  398903 command_runner.go:130] > [crio]
	I1212 20:28:54.934386  398903 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 20:28:54.934403  398903 command_runner.go:130] > # containers images, in this directory.
	I1212 20:28:54.934708  398903 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 20:28:54.934725  398903 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 20:28:54.935118  398903 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1212 20:28:54.935167  398903 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1212 20:28:54.935270  398903 command_runner.go:130] > # imagestore = ""
	I1212 20:28:54.935280  398903 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 20:28:54.935288  398903 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 20:28:54.935534  398903 command_runner.go:130] > # storage_driver = "overlay"
	I1212 20:28:54.935547  398903 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 20:28:54.935554  398903 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 20:28:54.935682  398903 command_runner.go:130] > # storage_option = [
	I1212 20:28:54.935790  398903 command_runner.go:130] > # ]
	I1212 20:28:54.935801  398903 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 20:28:54.935808  398903 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 20:28:54.935977  398903 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 20:28:54.935987  398903 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 20:28:54.936004  398903 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 20:28:54.936009  398903 command_runner.go:130] > # always happen on a node reboot
	I1212 20:28:54.936228  398903 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 20:28:54.936250  398903 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 20:28:54.936257  398903 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 20:28:54.936263  398903 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 20:28:54.936389  398903 command_runner.go:130] > # version_file_persist = ""
	I1212 20:28:54.936402  398903 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 20:28:54.936411  398903 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 20:28:54.937698  398903 command_runner.go:130] > # internal_wipe = true
	I1212 20:28:54.937721  398903 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 20:28:54.937728  398903 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 20:28:54.937860  398903 command_runner.go:130] > # internal_repair = true
	I1212 20:28:54.937871  398903 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 20:28:54.937878  398903 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 20:28:54.937885  398903 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 20:28:54.938097  398903 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 20:28:54.938132  398903 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 20:28:54.938152  398903 command_runner.go:130] > [crio.api]
	I1212 20:28:54.938172  398903 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 20:28:54.938284  398903 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 20:28:54.938314  398903 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 20:28:54.938521  398903 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 20:28:54.938555  398903 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 20:28:54.938577  398903 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 20:28:54.938680  398903 command_runner.go:130] > # stream_port = "0"
	I1212 20:28:54.938717  398903 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 20:28:54.938951  398903 command_runner.go:130] > # stream_enable_tls = false
	I1212 20:28:54.938995  398903 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 20:28:54.939084  398903 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 20:28:54.939113  398903 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 20:28:54.939142  398903 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1212 20:28:54.939249  398903 command_runner.go:130] > # stream_tls_cert = ""
	I1212 20:28:54.939291  398903 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 20:28:54.939312  398903 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1212 20:28:54.939622  398903 command_runner.go:130] > # stream_tls_key = ""
	I1212 20:28:54.939657  398903 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 20:28:54.939704  398903 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 20:28:54.939736  398903 command_runner.go:130] > # automatically pick up the changes.
	I1212 20:28:54.939811  398903 command_runner.go:130] > # stream_tls_ca = ""
	I1212 20:28:54.939858  398903 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 20:28:54.940308  398903 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 20:28:54.940353  398903 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 20:28:54.940776  398903 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1212 20:28:54.940788  398903 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 20:28:54.940801  398903 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 20:28:54.940806  398903 command_runner.go:130] > [crio.runtime]
	I1212 20:28:54.940824  398903 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 20:28:54.940830  398903 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 20:28:54.940834  398903 command_runner.go:130] > # "nofile=1024:2048"
	I1212 20:28:54.940840  398903 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 20:28:54.940969  398903 command_runner.go:130] > # default_ulimits = [
	I1212 20:28:54.941191  398903 command_runner.go:130] > # ]
	I1212 20:28:54.941204  398903 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 20:28:54.941558  398903 command_runner.go:130] > # no_pivot = false
	I1212 20:28:54.941568  398903 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 20:28:54.941575  398903 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 20:28:54.941945  398903 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 20:28:54.941956  398903 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 20:28:54.941961  398903 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 20:28:54.942013  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:28:54.942279  398903 command_runner.go:130] > # conmon = ""
	I1212 20:28:54.942287  398903 command_runner.go:130] > # Cgroup setting for conmon
	I1212 20:28:54.942295  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 20:28:54.942500  398903 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 20:28:54.942511  398903 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 20:28:54.942545  398903 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 20:28:54.942582  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:28:54.942706  398903 command_runner.go:130] > # conmon_env = [
	I1212 20:28:54.942961  398903 command_runner.go:130] > # ]
	I1212 20:28:54.943022  398903 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 20:28:54.943043  398903 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 20:28:54.943084  398903 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 20:28:54.943203  398903 command_runner.go:130] > # default_env = [
	I1212 20:28:54.943456  398903 command_runner.go:130] > # ]
	I1212 20:28:54.943514  398903 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 20:28:54.943537  398903 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1212 20:28:54.943931  398903 command_runner.go:130] > # selinux = false
	I1212 20:28:54.943943  398903 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 20:28:54.943997  398903 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1212 20:28:54.944007  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944219  398903 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:28:54.944231  398903 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1212 20:28:54.944237  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944517  398903 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1212 20:28:54.944529  398903 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 20:28:54.944536  398903 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 20:28:54.944595  398903 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 20:28:54.944603  398903 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 20:28:54.944609  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944908  398903 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 20:28:54.944919  398903 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 20:28:54.944924  398903 command_runner.go:130] > # the cgroup blockio controller.
	I1212 20:28:54.945253  398903 command_runner.go:130] > # blockio_config_file = ""
	I1212 20:28:54.945265  398903 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 20:28:54.945309  398903 command_runner.go:130] > # blockio parameters.
	I1212 20:28:54.945663  398903 command_runner.go:130] > # blockio_reload = false
	I1212 20:28:54.945676  398903 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 20:28:54.945725  398903 command_runner.go:130] > # irqbalance daemon.
	I1212 20:28:54.946100  398903 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 20:28:54.946111  398903 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1212 20:28:54.946174  398903 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 20:28:54.946186  398903 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 20:28:54.946547  398903 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 20:28:54.946561  398903 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 20:28:54.946567  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.946867  398903 command_runner.go:130] > # rdt_config_file = ""
	I1212 20:28:54.946878  398903 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 20:28:54.947089  398903 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 20:28:54.947100  398903 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 20:28:54.947442  398903 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 20:28:54.947454  398903 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 20:28:54.947513  398903 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 20:28:54.947527  398903 command_runner.go:130] > # will be added.
	I1212 20:28:54.947601  398903 command_runner.go:130] > # default_capabilities = [
	I1212 20:28:54.947867  398903 command_runner.go:130] > # 	"CHOWN",
	I1212 20:28:54.948094  398903 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 20:28:54.948277  398903 command_runner.go:130] > # 	"FSETID",
	I1212 20:28:54.948500  398903 command_runner.go:130] > # 	"FOWNER",
	I1212 20:28:54.948701  398903 command_runner.go:130] > # 	"SETGID",
	I1212 20:28:54.948883  398903 command_runner.go:130] > # 	"SETUID",
	I1212 20:28:54.949109  398903 command_runner.go:130] > # 	"SETPCAP",
	I1212 20:28:54.949307  398903 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 20:28:54.949502  398903 command_runner.go:130] > # 	"KILL",
	I1212 20:28:54.949671  398903 command_runner.go:130] > # ]
	I1212 20:28:54.949741  398903 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 20:28:54.949814  398903 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 20:28:54.950073  398903 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 20:28:54.950143  398903 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 20:28:54.950211  398903 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:28:54.950289  398903 command_runner.go:130] > default_sysctls = [
	I1212 20:28:54.950330  398903 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 20:28:54.950370  398903 command_runner.go:130] > ]
	I1212 20:28:54.950439  398903 command_runner.go:130] > # List of devices on the host that a
	I1212 20:28:54.950465  398903 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 20:28:54.950518  398903 command_runner.go:130] > # allowed_devices = [
	I1212 20:28:54.950672  398903 command_runner.go:130] > # 	"/dev/fuse",
	I1212 20:28:54.950902  398903 command_runner.go:130] > # 	"/dev/net/tun",
	I1212 20:28:54.951150  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951221  398903 command_runner.go:130] > # List of additional devices. specified as
	I1212 20:28:54.951244  398903 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 20:28:54.951280  398903 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 20:28:54.951306  398903 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:28:54.951324  398903 command_runner.go:130] > # additional_devices = [
	I1212 20:28:54.951343  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951424  398903 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 20:28:54.951503  398903 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 20:28:54.951521  398903 command_runner.go:130] > # 	"/etc/cdi",
	I1212 20:28:54.951592  398903 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 20:28:54.951609  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951651  398903 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 20:28:54.951672  398903 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 20:28:54.951689  398903 command_runner.go:130] > # Defaults to false.
	I1212 20:28:54.951751  398903 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 20:28:54.951809  398903 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 20:28:54.951879  398903 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 20:28:54.951906  398903 command_runner.go:130] > # hooks_dir = [
	I1212 20:28:54.951934  398903 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 20:28:54.951952  398903 command_runner.go:130] > # ]
	I1212 20:28:54.952010  398903 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 20:28:54.952049  398903 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 20:28:54.952097  398903 command_runner.go:130] > # its default mounts from the following two files:
	I1212 20:28:54.952138  398903 command_runner.go:130] > #
	I1212 20:28:54.952160  398903 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 20:28:54.952191  398903 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 20:28:54.952262  398903 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 20:28:54.952281  398903 command_runner.go:130] > #
	I1212 20:28:54.952324  398903 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 20:28:54.952346  398903 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 20:28:54.952404  398903 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 20:28:54.952491  398903 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 20:28:54.952529  398903 command_runner.go:130] > #
	I1212 20:28:54.952568  398903 command_runner.go:130] > # default_mounts_file = ""
	I1212 20:28:54.952602  398903 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 20:28:54.952623  398903 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 20:28:54.952643  398903 command_runner.go:130] > # pids_limit = -1
	I1212 20:28:54.952677  398903 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 20:28:54.952708  398903 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 20:28:54.952837  398903 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 20:28:54.952892  398903 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 20:28:54.952911  398903 command_runner.go:130] > # log_size_max = -1
	I1212 20:28:54.952955  398903 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 20:28:54.953009  398903 command_runner.go:130] > # log_to_journald = false
	I1212 20:28:54.953062  398903 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 20:28:54.953088  398903 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 20:28:54.953123  398903 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 20:28:54.953149  398903 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 20:28:54.953170  398903 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 20:28:54.953206  398903 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 20:28:54.953299  398903 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 20:28:54.953339  398903 command_runner.go:130] > # read_only = false
	I1212 20:28:54.953359  398903 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 20:28:54.953395  398903 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 20:28:54.953418  398903 command_runner.go:130] > # live configuration reload.
	I1212 20:28:54.953436  398903 command_runner.go:130] > # log_level = "info"
	I1212 20:28:54.953472  398903 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 20:28:54.953562  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.953601  398903 command_runner.go:130] > # log_filter = ""
	I1212 20:28:54.953622  398903 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 20:28:54.953643  398903 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 20:28:54.953675  398903 command_runner.go:130] > # separated by comma.
	I1212 20:28:54.953712  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.953763  398903 command_runner.go:130] > # uid_mappings = ""
	I1212 20:28:54.953804  398903 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 20:28:54.953825  398903 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 20:28:54.953843  398903 command_runner.go:130] > # separated by comma.
	I1212 20:28:54.953907  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.953931  398903 command_runner.go:130] > # gid_mappings = ""
	I1212 20:28:54.953969  398903 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 20:28:54.954021  398903 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:28:54.954062  398903 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:28:54.954085  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.954103  398903 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 20:28:54.954162  398903 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 20:28:54.954184  398903 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:28:54.954234  398903 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:28:54.954322  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.954363  398903 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 20:28:54.954382  398903 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 20:28:54.954423  398903 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 20:28:54.954443  398903 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 20:28:54.954461  398903 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 20:28:54.954533  398903 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 20:28:54.954586  398903 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 20:28:54.954623  398903 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 20:28:54.954643  398903 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 20:28:54.954683  398903 command_runner.go:130] > # drop_infra_ctr = true
	I1212 20:28:54.954704  398903 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 20:28:54.954737  398903 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 20:28:54.954797  398903 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 20:28:54.954876  398903 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 20:28:54.954917  398903 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 20:28:54.954947  398903 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 20:28:54.954967  398903 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 20:28:54.955001  398903 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 20:28:54.955088  398903 command_runner.go:130] > # shared_cpuset = ""
	I1212 20:28:54.955124  398903 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 20:28:54.955160  398903 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 20:28:54.955179  398903 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 20:28:54.955201  398903 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 20:28:54.955242  398903 command_runner.go:130] > # pinns_path = ""
	I1212 20:28:54.955301  398903 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 20:28:54.955365  398903 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 20:28:54.955383  398903 command_runner.go:130] > # enable_criu_support = true
	I1212 20:28:54.955425  398903 command_runner.go:130] > # Enable/disable the generation of the container,
	I1212 20:28:54.955447  398903 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 20:28:54.955466  398903 command_runner.go:130] > # enable_pod_events = false
	I1212 20:28:54.955506  398903 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 20:28:54.955594  398903 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 20:28:54.955624  398903 command_runner.go:130] > # default_runtime = "crun"
	I1212 20:28:54.955661  398903 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 20:28:54.955697  398903 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 20:28:54.955721  398903 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 20:28:54.955790  398903 command_runner.go:130] > # creation as a file is not desired either.
	I1212 20:28:54.955868  398903 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 20:28:54.955891  398903 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 20:28:54.955927  398903 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 20:28:54.955946  398903 command_runner.go:130] > # ]
	I1212 20:28:54.955966  398903 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 20:28:54.956007  398903 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 20:28:54.956057  398903 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 20:28:54.956117  398903 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 20:28:54.956136  398903 command_runner.go:130] > #
	I1212 20:28:54.956299  398903 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 20:28:54.956391  398903 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 20:28:54.956423  398903 command_runner.go:130] > # runtime_type = "oci"
	I1212 20:28:54.956443  398903 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 20:28:54.956476  398903 command_runner.go:130] > # inherit_default_runtime = false
	I1212 20:28:54.956515  398903 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 20:28:54.956535  398903 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 20:28:54.956555  398903 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 20:28:54.956602  398903 command_runner.go:130] > # monitor_env = []
	I1212 20:28:54.956632  398903 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 20:28:54.956651  398903 command_runner.go:130] > # allowed_annotations = []
	I1212 20:28:54.956673  398903 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 20:28:54.956703  398903 command_runner.go:130] > # no_sync_log = false
	I1212 20:28:54.956730  398903 command_runner.go:130] > # default_annotations = {}
	I1212 20:28:54.956749  398903 command_runner.go:130] > # stream_websockets = false
	I1212 20:28:54.956770  398903 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:28:54.956828  398903 command_runner.go:130] > # Where:
	I1212 20:28:54.956858  398903 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 20:28:54.956879  398903 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 20:28:54.956902  398903 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 20:28:54.956934  398903 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 20:28:54.956956  398903 command_runner.go:130] > #   in $PATH.
	I1212 20:28:54.956979  398903 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 20:28:54.957012  398903 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 20:28:54.957045  398903 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 20:28:54.957066  398903 command_runner.go:130] > #   state.
	I1212 20:28:54.957088  398903 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 20:28:54.957122  398903 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 20:28:54.957146  398903 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1212 20:28:54.957169  398903 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1212 20:28:54.957202  398903 command_runner.go:130] > #   the values from the default runtime on load time.
	I1212 20:28:54.957227  398903 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 20:28:54.957250  398903 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 20:28:54.957281  398903 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 20:28:54.957305  398903 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 20:28:54.957327  398903 command_runner.go:130] > #   The currently recognized values are:
	I1212 20:28:54.957359  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 20:28:54.957385  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 20:28:54.957408  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 20:28:54.957450  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 20:28:54.957471  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 20:28:54.957498  398903 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 20:28:54.957534  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 20:28:54.957557  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 20:28:54.957580  398903 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 20:28:54.957613  398903 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1212 20:28:54.957636  398903 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1212 20:28:54.957657  398903 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1212 20:28:54.957689  398903 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1212 20:28:54.957712  398903 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1212 20:28:54.957733  398903 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1212 20:28:54.957769  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1212 20:28:54.957795  398903 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 20:28:54.957816  398903 command_runner.go:130] > #   deprecated option "conmon".
	I1212 20:28:54.957848  398903 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 20:28:54.957870  398903 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 20:28:54.957893  398903 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 20:28:54.957923  398903 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 20:28:54.957949  398903 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 20:28:54.957971  398903 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 20:28:54.958007  398903 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1212 20:28:54.958030  398903 command_runner.go:130] > #   conmon-rs by using:
	I1212 20:28:54.958053  398903 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1212 20:28:54.958092  398903 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1212 20:28:54.958133  398903 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1212 20:28:54.958204  398903 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 20:28:54.958225  398903 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 20:28:54.958278  398903 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1212 20:28:54.958303  398903 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1212 20:28:54.958340  398903 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1212 20:28:54.958372  398903 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1212 20:28:54.958415  398903 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1212 20:28:54.958449  398903 command_runner.go:130] > #   when a machine crash happens.
	I1212 20:28:54.958472  398903 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1212 20:28:54.958496  398903 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1212 20:28:54.958530  398903 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1212 20:28:54.958560  398903 command_runner.go:130] > #   seccomp profile for the runtime.
	I1212 20:28:54.958583  398903 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1212 20:28:54.958606  398903 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1212 20:28:54.958635  398903 command_runner.go:130] > #
	I1212 20:28:54.958656  398903 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 20:28:54.958676  398903 command_runner.go:130] > #
	I1212 20:28:54.958708  398903 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 20:28:54.958738  398903 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 20:28:54.958756  398903 command_runner.go:130] > #
	I1212 20:28:54.958778  398903 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 20:28:54.958809  398903 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 20:28:54.958834  398903 command_runner.go:130] > #
	I1212 20:28:54.958854  398903 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 20:28:54.958874  398903 command_runner.go:130] > # feature.
	I1212 20:28:54.958903  398903 command_runner.go:130] > #
	I1212 20:28:54.958934  398903 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1212 20:28:54.958955  398903 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 20:28:54.958978  398903 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 20:28:54.959015  398903 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 20:28:54.959041  398903 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 20:28:54.959060  398903 command_runner.go:130] > #
	I1212 20:28:54.959092  398903 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 20:28:54.959116  398903 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 20:28:54.959135  398903 command_runner.go:130] > #
	I1212 20:28:54.959166  398903 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1212 20:28:54.959195  398903 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 20:28:54.959213  398903 command_runner.go:130] > #
	I1212 20:28:54.959234  398903 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 20:28:54.959264  398903 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 20:28:54.959290  398903 command_runner.go:130] > # limitation.
	I1212 20:28:54.959309  398903 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1212 20:28:54.959329  398903 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1212 20:28:54.959363  398903 command_runner.go:130] > runtime_type = ""
	I1212 20:28:54.959390  398903 command_runner.go:130] > runtime_root = "/run/crun"
	I1212 20:28:54.959409  398903 command_runner.go:130] > inherit_default_runtime = false
	I1212 20:28:54.959429  398903 command_runner.go:130] > runtime_config_path = ""
	I1212 20:28:54.959460  398903 command_runner.go:130] > container_min_memory = ""
	I1212 20:28:54.959486  398903 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 20:28:54.959503  398903 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 20:28:54.959521  398903 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:28:54.959541  398903 command_runner.go:130] > allowed_annotations = [
	I1212 20:28:54.959574  398903 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1212 20:28:54.959593  398903 command_runner.go:130] > ]
	I1212 20:28:54.959612  398903 command_runner.go:130] > privileged_without_host_devices = false
	I1212 20:28:54.959644  398903 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 20:28:54.959671  398903 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1212 20:28:54.959688  398903 command_runner.go:130] > runtime_type = ""
	I1212 20:28:54.959705  398903 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 20:28:54.959727  398903 command_runner.go:130] > inherit_default_runtime = false
	I1212 20:28:54.959762  398903 command_runner.go:130] > runtime_config_path = ""
	I1212 20:28:54.959780  398903 command_runner.go:130] > container_min_memory = ""
	I1212 20:28:54.959800  398903 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 20:28:54.959819  398903 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 20:28:54.959855  398903 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:28:54.959872  398903 command_runner.go:130] > privileged_without_host_devices = false
	I1212 20:28:54.959894  398903 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 20:28:54.959924  398903 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 20:28:54.959953  398903 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 20:28:54.959976  398903 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 20:28:54.960002  398903 command_runner.go:130] > # The currently supported resources are "cpuperiod" "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1212 20:28:54.960047  398903 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1212 20:28:54.960072  398903 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1212 20:28:54.960106  398903 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 20:28:54.960135  398903 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 20:28:54.960156  398903 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 20:28:54.960176  398903 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 20:28:54.960207  398903 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 20:28:54.960236  398903 command_runner.go:130] > # Example:
	I1212 20:28:54.960257  398903 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 20:28:54.960281  398903 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 20:28:54.960315  398903 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 20:28:54.960337  398903 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 20:28:54.960356  398903 command_runner.go:130] > # cpuset = "0-1"
	I1212 20:28:54.960392  398903 command_runner.go:130] > # cpushares = "5"
	I1212 20:28:54.960413  398903 command_runner.go:130] > # cpuquota = "1000"
	I1212 20:28:54.960435  398903 command_runner.go:130] > # cpuperiod = "100000"
	I1212 20:28:54.960473  398903 command_runner.go:130] > # cpulimit = "35"
	I1212 20:28:54.960495  398903 command_runner.go:130] > # Where:
	I1212 20:28:54.960507  398903 command_runner.go:130] > # The workload name is workload-type.
	I1212 20:28:54.960516  398903 command_runner.go:130] > # To select this workload, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 20:28:54.960522  398903 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 20:28:54.960542  398903 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 20:28:54.960555  398903 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 20:28:54.960563  398903 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 20:28:54.960568  398903 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 20:28:54.960575  398903 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 20:28:54.960579  398903 command_runner.go:130] > # Default value is set to true
	I1212 20:28:54.960595  398903 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 20:28:54.960602  398903 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 20:28:54.960613  398903 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 20:28:54.960618  398903 command_runner.go:130] > # Default value is set to 'false'
	I1212 20:28:54.960623  398903 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 20:28:54.960637  398903 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1212 20:28:54.960645  398903 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1212 20:28:54.960649  398903 command_runner.go:130] > # timezone = ""
	I1212 20:28:54.960656  398903 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 20:28:54.960661  398903 command_runner.go:130] > #
	I1212 20:28:54.960668  398903 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 20:28:54.960675  398903 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1212 20:28:54.960682  398903 command_runner.go:130] > [crio.image]
	I1212 20:28:54.960688  398903 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 20:28:54.960693  398903 command_runner.go:130] > # default_transport = "docker://"
	I1212 20:28:54.960702  398903 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 20:28:54.960714  398903 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:28:54.960719  398903 command_runner.go:130] > # global_auth_file = ""
	I1212 20:28:54.960724  398903 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 20:28:54.960730  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.960738  398903 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.960745  398903 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 20:28:54.960758  398903 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:28:54.960764  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.960770  398903 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 20:28:54.960777  398903 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 20:28:54.960783  398903 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 20:28:54.960793  398903 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 20:28:54.960800  398903 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 20:28:54.960804  398903 command_runner.go:130] > # pause_command = "/pause"
	I1212 20:28:54.960810  398903 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 20:28:54.960819  398903 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 20:28:54.960828  398903 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 20:28:54.960837  398903 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 20:28:54.960843  398903 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 20:28:54.960855  398903 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 20:28:54.960859  398903 command_runner.go:130] > # pinned_images = [
	I1212 20:28:54.960863  398903 command_runner.go:130] > # ]
	I1212 20:28:54.960869  398903 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 20:28:54.960879  398903 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 20:28:54.960885  398903 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 20:28:54.960891  398903 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 20:28:54.960902  398903 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 20:28:54.960910  398903 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1212 20:28:54.960916  398903 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 20:28:54.960923  398903 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 20:28:54.960933  398903 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 20:28:54.960939  398903 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1212 20:28:54.960948  398903 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 20:28:54.960953  398903 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1212 20:28:54.960960  398903 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 20:28:54.960969  398903 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 20:28:54.960973  398903 command_runner.go:130] > # changing them here.
	I1212 20:28:54.960979  398903 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1212 20:28:54.960983  398903 command_runner.go:130] > # insecure_registries = [
	I1212 20:28:54.960986  398903 command_runner.go:130] > # ]
	I1212 20:28:54.960995  398903 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 20:28:54.961006  398903 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 20:28:54.961012  398903 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 20:28:54.961020  398903 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 20:28:54.961026  398903 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 20:28:54.961032  398903 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1212 20:28:54.961042  398903 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1212 20:28:54.961046  398903 command_runner.go:130] > # auto_reload_registries = false
	I1212 20:28:54.961054  398903 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1212 20:28:54.961062  398903 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1212 20:28:54.961069  398903 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1212 20:28:54.961077  398903 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1212 20:28:54.961082  398903 command_runner.go:130] > # The mode of short name resolution.
	I1212 20:28:54.961089  398903 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1212 20:28:54.961100  398903 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1212 20:28:54.961105  398903 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1212 20:28:54.961112  398903 command_runner.go:130] > # short_name_mode = "enforcing"
	I1212 20:28:54.961118  398903 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1212 20:28:54.961124  398903 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1212 20:28:54.961132  398903 command_runner.go:130] > # oci_artifact_mount_support = true
	I1212 20:28:54.961138  398903 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 20:28:54.961142  398903 command_runner.go:130] > # CNI plugins.
	I1212 20:28:54.961146  398903 command_runner.go:130] > [crio.network]
	I1212 20:28:54.961152  398903 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 20:28:54.961159  398903 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1212 20:28:54.961164  398903 command_runner.go:130] > # cni_default_network = ""
	I1212 20:28:54.961171  398903 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 20:28:54.961179  398903 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 20:28:54.961185  398903 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 20:28:54.961189  398903 command_runner.go:130] > # plugin_dirs = [
	I1212 20:28:54.961195  398903 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 20:28:54.961198  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961209  398903 command_runner.go:130] > # List of included pod metrics.
	I1212 20:28:54.961213  398903 command_runner.go:130] > # included_pod_metrics = [
	I1212 20:28:54.961217  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961224  398903 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 20:28:54.961228  398903 command_runner.go:130] > [crio.metrics]
	I1212 20:28:54.961234  398903 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 20:28:54.961243  398903 command_runner.go:130] > # enable_metrics = false
	I1212 20:28:54.961248  398903 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 20:28:54.961253  398903 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 20:28:54.961262  398903 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 20:28:54.961271  398903 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 20:28:54.961280  398903 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 20:28:54.961285  398903 command_runner.go:130] > # metrics_collectors = [
	I1212 20:28:54.961291  398903 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 20:28:54.961296  398903 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 20:28:54.961302  398903 command_runner.go:130] > # 	"containers_oom_total",
	I1212 20:28:54.961306  398903 command_runner.go:130] > # 	"processes_defunct",
	I1212 20:28:54.961311  398903 command_runner.go:130] > # 	"operations_total",
	I1212 20:28:54.961315  398903 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 20:28:54.961320  398903 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 20:28:54.961324  398903 command_runner.go:130] > # 	"operations_errors_total",
	I1212 20:28:54.961328  398903 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 20:28:54.961333  398903 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 20:28:54.961338  398903 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 20:28:54.961342  398903 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 20:28:54.961346  398903 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 20:28:54.961351  398903 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 20:28:54.961358  398903 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 20:28:54.961363  398903 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 20:28:54.961374  398903 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1212 20:28:54.961377  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961383  398903 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1212 20:28:54.961389  398903 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1212 20:28:54.961394  398903 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 20:28:54.961398  398903 command_runner.go:130] > # metrics_port = 9090
	I1212 20:28:54.961404  398903 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 20:28:54.961409  398903 command_runner.go:130] > # metrics_socket = ""
	I1212 20:28:54.961420  398903 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 20:28:54.961429  398903 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 20:28:54.961440  398903 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 20:28:54.961445  398903 command_runner.go:130] > # certificate on any modification event.
	I1212 20:28:54.961452  398903 command_runner.go:130] > # metrics_cert = ""
	I1212 20:28:54.961458  398903 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 20:28:54.961464  398903 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 20:28:54.961470  398903 command_runner.go:130] > # metrics_key = ""
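	The [crio.metrics] block above only lists the knobs; a minimal sketch of what consuming them looks like, assuming enable_metrics = true and the default metrics_host/metrics_port (127.0.0.1:9090) shown in the comments, is a plain HTTP scrape of the Prometheus endpoint:

	// Hedged sketch, not part of the test log: scrape CRI-O's Prometheus
	// endpoint, assuming metrics are enabled and listening on the defaults above.
	package main

	import (
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, err := io.ReadAll(resp.Body)
		if err != nil {
			panic(err)
		}
		// Plain-text Prometheus exposition format; the collectors listed above
		// (operations_total, image_pulls_bytes_total, ...) appear as metric families.
		fmt.Printf("%s", body)
	}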
	I1212 20:28:54.961476  398903 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 20:28:54.961480  398903 command_runner.go:130] > [crio.tracing]
	I1212 20:28:54.961487  398903 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 20:28:54.961491  398903 command_runner.go:130] > # enable_tracing = false
	I1212 20:28:54.961499  398903 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 20:28:54.961504  398903 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1212 20:28:54.961513  398903 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 20:28:54.961520  398903 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 20:28:54.961527  398903 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 20:28:54.961530  398903 command_runner.go:130] > [crio.nri]
	I1212 20:28:54.961534  398903 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 20:28:54.961544  398903 command_runner.go:130] > # enable_nri = true
	I1212 20:28:54.961548  398903 command_runner.go:130] > # NRI socket to listen on.
	I1212 20:28:54.961553  398903 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 20:28:54.961559  398903 command_runner.go:130] > # NRI plugin directory to use.
	I1212 20:28:54.961564  398903 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 20:28:54.961569  398903 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 20:28:54.961574  398903 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 20:28:54.961579  398903 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 20:28:54.961660  398903 command_runner.go:130] > # nri_disable_connections = false
	I1212 20:28:54.961672  398903 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 20:28:54.961678  398903 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 20:28:54.961683  398903 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 20:28:54.961689  398903 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 20:28:54.961696  398903 command_runner.go:130] > # NRI default validator configuration.
	I1212 20:28:54.961703  398903 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1212 20:28:54.961717  398903 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1212 20:28:54.961722  398903 command_runner.go:130] > # can be restricted/rejected:
	I1212 20:28:54.961728  398903 command_runner.go:130] > # - OCI hook injection
	I1212 20:28:54.961734  398903 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1212 20:28:54.961740  398903 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1212 20:28:54.961747  398903 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1212 20:28:54.961752  398903 command_runner.go:130] > # - adjustment of linux namespaces
	I1212 20:28:54.961759  398903 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1212 20:28:54.961766  398903 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1212 20:28:54.961775  398903 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1212 20:28:54.961779  398903 command_runner.go:130] > #
	I1212 20:28:54.961783  398903 command_runner.go:130] > # [crio.nri.default_validator]
	I1212 20:28:54.961791  398903 command_runner.go:130] > # nri_enable_default_validator = false
	I1212 20:28:54.961796  398903 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1212 20:28:54.961802  398903 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1212 20:28:54.961810  398903 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1212 20:28:54.961815  398903 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1212 20:28:54.961821  398903 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1212 20:28:54.961828  398903 command_runner.go:130] > # nri_validator_required_plugins = [
	I1212 20:28:54.961831  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961838  398903 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1212 20:28:54.961845  398903 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 20:28:54.961851  398903 command_runner.go:130] > [crio.stats]
	I1212 20:28:54.961860  398903 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 20:28:54.961866  398903 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 20:28:54.961872  398903 command_runner.go:130] > # stats_collection_period = 0
	I1212 20:28:54.961879  398903 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1212 20:28:54.961889  398903 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1212 20:28:54.961894  398903 command_runner.go:130] > # collection_period = 0
	I1212 20:28:54.961945  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912485774Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1212 20:28:54.961961  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912523214Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1212 20:28:54.961978  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912551908Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1212 20:28:54.961989  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912577237Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1212 20:28:54.962000  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912661332Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.962016  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912929282Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1212 20:28:54.962028  398903 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 20:28:54.962158  398903 cni.go:84] Creating CNI manager for ""
	I1212 20:28:54.962172  398903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:28:54.962187  398903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:28:54.962211  398903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-261311 NodeName:functional-261311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:28:54.962351  398903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-261311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:28:54.962430  398903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:28:54.969281  398903 command_runner.go:130] > kubeadm
	I1212 20:28:54.969300  398903 command_runner.go:130] > kubectl
	I1212 20:28:54.969304  398903 command_runner.go:130] > kubelet
	I1212 20:28:54.970141  398903 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:28:54.970208  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:28:54.977797  398903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:28:54.990948  398903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:28:55.010887  398903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 20:28:55.035195  398903 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:28:55.039688  398903 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 20:28:55.039770  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:55.162925  398903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:28:55.180455  398903 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311 for IP: 192.168.49.2
	I1212 20:28:55.180486  398903 certs.go:195] generating shared ca certs ...
	I1212 20:28:55.180503  398903 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.180666  398903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:28:55.180714  398903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:28:55.180726  398903 certs.go:257] generating profile certs ...
	I1212 20:28:55.180830  398903 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key
	I1212 20:28:55.180895  398903 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7
	I1212 20:28:55.180950  398903 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key
	I1212 20:28:55.180963  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:28:55.180976  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:28:55.180993  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:28:55.181015  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:28:55.181034  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:28:55.181047  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:28:55.181062  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:28:55.181077  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:28:55.181130  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:28:55.181167  398903 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:28:55.181180  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:28:55.181208  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:28:55.181238  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:28:55.181263  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:28:55.181322  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:28:55.181358  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.181374  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.181387  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.181918  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:28:55.205330  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:28:55.228282  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:28:55.247851  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:28:55.266269  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:28:55.284183  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:28:55.302120  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:28:55.319891  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:28:55.338073  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:28:55.356708  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:28:55.374821  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:28:55.392459  398903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:28:55.405239  398903 ssh_runner.go:195] Run: openssl version
	I1212 20:28:55.411334  398903 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 20:28:55.411437  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.418985  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:28:55.426485  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430183  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430452  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430510  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.471108  398903 command_runner.go:130] > b5213941
	I1212 20:28:55.471637  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:28:55.479292  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.486905  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:28:55.494608  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498479  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498582  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498669  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.541933  398903 command_runner.go:130] > 51391683
	I1212 20:28:55.542454  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:28:55.550083  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.558343  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:28:55.567964  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571832  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571862  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571932  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.617329  398903 command_runner.go:130] > 3ec20f2e
	I1212 20:28:55.617911  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
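	The two steps above (openssl x509 -hash to compute the subject hash, then checking for a symlink at /etc/ssl/certs/<hash>.0) are how the extra CA certificates get installed into the system trust directory. A hedged Go sketch of the same hash-and-symlink step, shelling out to openssl exactly as the log does; the certificate path is copied from the log purely for illustration:

	// Hedged sketch of the hash-and-symlink step shown in the log above.
	// Requires write access to /etc/ssl/certs (the log runs this via sudo).
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		certPath := "/usr/share/ca-certificates/3648532.pem" // example path from the log

		// Equivalent of: openssl x509 -hash -noout -in <cert>
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e" in the log

		// Equivalent of: ln -fs <cert> /etc/ssl/certs/<hash>.0
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // -f semantics: replace an existing link if present
		if err := os.Symlink(certPath, link); err != nil {
			panic(err)
		}
		fmt.Println("linked", certPath, "->", link)
	}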
	I1212 20:28:55.625593  398903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:28:55.629390  398903 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:28:55.629419  398903 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 20:28:55.629427  398903 command_runner.go:130] > Device: 259,1	Inode: 1315224     Links: 1
	I1212 20:28:55.629433  398903 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:28:55.629439  398903 command_runner.go:130] > Access: 2025-12-12 20:24:47.845478497 +0000
	I1212 20:28:55.629445  398903 command_runner.go:130] > Modify: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629449  398903 command_runner.go:130] > Change: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629454  398903 command_runner.go:130] >  Birth: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629525  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:28:55.669986  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.670463  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:28:55.711204  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.711650  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:28:55.751880  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.752298  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:28:55.793260  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.793349  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:28:55.836082  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.836162  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:28:55.878637  398903 command_runner.go:130] > Certificate will not expire
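	Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate expires within the next 24 hours (86400 seconds); "Certificate will not expire" means it does not. A small Go equivalent using only the standard library, with the certificate path taken from the log as an example:

	// Hedged Go equivalent of `openssl x509 -checkend 86400` for a PEM certificate.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt") // path from the log
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Will the certificate's NotAfter be reached within the next 24h?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire within 24h")
		} else {
			fmt.Println("Certificate will not expire") // matches the log output above
		}
	}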
	I1212 20:28:55.879114  398903 kubeadm.go:401] StartCluster: {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:55.879241  398903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:28:55.879321  398903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:28:55.906646  398903 cri.go:89] found id: ""
	I1212 20:28:55.906721  398903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:28:55.913746  398903 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 20:28:55.913771  398903 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 20:28:55.913778  398903 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 20:28:55.914790  398903 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:28:55.914807  398903 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:28:55.914874  398903 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:28:55.922292  398903 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:28:55.922687  398903 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-261311" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.922785  398903 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "functional-261311" cluster setting kubeconfig missing "functional-261311" context setting]
	I1212 20:28:55.923055  398903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.923461  398903 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.923610  398903 kapi.go:59] client config for functional-261311: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:28:55.924164  398903 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:28:55.924185  398903 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:28:55.924192  398903 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:28:55.924198  398903 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:28:55.924202  398903 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
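	The kapi.go line above shows the rest.Config minikube builds from the repaired kubeconfig (client certificate, key, and CA from the functional-261311 profile). For comparison, a hedged sketch of the conventional client-go way to load the same kubeconfig and query the node the log later waits on; it assumes k8s.io/client-go and k8s.io/apimachinery are available as module dependencies, and the path and node name are taken from the log only as examples:

	// Hedged sketch: build a client from the kubeconfig and fetch the node object,
	// roughly what the kapi.go / node_ready.go lines in this log do internally.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := "/home/jenkins/minikube-integration/22112-362983/kubeconfig" // path from the log

		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := client.CoreV1().Nodes().Get(context.Background(), "functional-261311", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("node:", node.Name)
	}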
	I1212 20:28:55.924512  398903 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:28:55.924617  398903 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 20:28:55.932459  398903 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 20:28:55.932497  398903 kubeadm.go:602] duration metric: took 17.683266ms to restartPrimaryControlPlane
	I1212 20:28:55.932527  398903 kubeadm.go:403] duration metric: took 53.402973ms to StartCluster
	I1212 20:28:55.932549  398903 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.932634  398903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.933272  398903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.933478  398903 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:28:55.933879  398903 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:28:55.933961  398903 addons.go:70] Setting storage-provisioner=true in profile "functional-261311"
	I1212 20:28:55.933975  398903 addons.go:239] Setting addon storage-provisioner=true in "functional-261311"
	I1212 20:28:55.933999  398903 host.go:66] Checking if "functional-261311" exists ...
	I1212 20:28:55.933941  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:55.934065  398903 addons.go:70] Setting default-storageclass=true in profile "functional-261311"
	I1212 20:28:55.934077  398903 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-261311"
	I1212 20:28:55.934349  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.934437  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.939847  398903 out.go:179] * Verifying Kubernetes components...
	I1212 20:28:55.942718  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:55.970904  398903 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:28:55.971648  398903 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.971825  398903 kapi.go:59] client config for functional-261311: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:28:55.972098  398903 addons.go:239] Setting addon default-storageclass=true in "functional-261311"
	I1212 20:28:55.972128  398903 host.go:66] Checking if "functional-261311" exists ...
	I1212 20:28:55.972592  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.974802  398903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:55.974826  398903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:28:55.974884  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:56.016147  398903 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:56.016169  398903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:28:56.016234  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:56.029989  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:56.052293  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:56.147892  398903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:28:56.182806  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:56.199875  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:56.957368  398903 node_ready.go:35] waiting up to 6m0s for node "functional-261311" to be "Ready" ...
	I1212 20:28:56.957463  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957488  398903 type.go:168] "Request Body" body=""
	I1212 20:28:56.957545  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1212 20:28:56.957546  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957630  398903 retry.go:31] will retry after 313.594755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957713  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:56.957754  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957788  398903 retry.go:31] will retry after 317.565464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:57.272396  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:57.275890  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:57.344322  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.344435  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.344471  398903 retry.go:31] will retry after 221.297028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.351139  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.351181  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.351200  398903 retry.go:31] will retry after 309.802672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.458417  398903 type.go:168] "Request Body" body=""
	I1212 20:28:57.458511  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:57.458807  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:57.566100  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:57.625592  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.625687  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.625728  398903 retry.go:31] will retry after 499.665469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.661822  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:57.729487  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.729527  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.729550  398903 retry.go:31] will retry after 503.664724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.958032  398903 type.go:168] "Request Body" body=""
	I1212 20:28:57.958134  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:57.958421  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:58.126013  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:58.197757  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:58.197828  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.197853  398903 retry.go:31] will retry after 1.10540153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.234015  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:58.297441  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:58.297548  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.297576  398903 retry.go:31] will retry after 1.092264057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:28:58.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:58.458062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:58.957619  398903 type.go:168] "Request Body" body=""
	I1212 20:28:58.957696  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:58.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:28:58.958116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:28:59.303542  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:59.364708  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:59.364773  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.364796  398903 retry.go:31] will retry after 1.503349263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.390910  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:59.449881  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:59.449970  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.450009  398903 retry.go:31] will retry after 1.024940216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.457981  398903 type.go:168] "Request Body" body=""
	I1212 20:28:59.458049  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:59.458335  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:59.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:28:59.957671  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:59.957942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:00.457683  398903 type.go:168] "Request Body" body=""
	I1212 20:29:00.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:00.458074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:00.475497  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:00.543993  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:00.544048  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.544072  398903 retry.go:31] will retry after 2.24833219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.868438  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:00.926476  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:00.930138  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.930173  398903 retry.go:31] will retry after 1.556562441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.958315  398903 type.go:168] "Request Body" body=""
	I1212 20:29:00.958392  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:00.958734  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:00.958787  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:01.458585  398903 type.go:168] "Request Body" body=""
	I1212 20:29:01.458668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:01.458995  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:01.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:29:01.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:01.958122  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:02.457889  398903 type.go:168] "Request Body" body=""
	I1212 20:29:02.457969  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:02.458299  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:02.487755  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:02.545597  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:02.549667  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.549705  398903 retry.go:31] will retry after 1.726891228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.793114  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:02.856403  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:02.860058  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.860101  398903 retry.go:31] will retry after 3.686133541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.958383  398903 type.go:168] "Request Body" body=""
	I1212 20:29:02.958453  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:02.958724  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:03.458506  398903 type.go:168] "Request Body" body=""
	I1212 20:29:03.458589  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:03.458945  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:03.459000  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:03.957692  398903 type.go:168] "Request Body" body=""
	I1212 20:29:03.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:03.958210  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:04.277666  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:04.331675  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:04.335668  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:04.335700  398903 retry.go:31] will retry after 4.014847664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:04.457944  398903 type.go:168] "Request Body" body=""
	I1212 20:29:04.458019  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:04.458285  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:04.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:04.957734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:04.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:05.457751  398903 type.go:168] "Request Body" body=""
	I1212 20:29:05.457828  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:05.458181  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:05.958009  398903 type.go:168] "Request Body" body=""
	I1212 20:29:05.958081  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:05.958416  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:05.958469  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:06.458265  398903 type.go:168] "Request Body" body=""
	I1212 20:29:06.458354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:06.458704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:06.546991  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:06.607592  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:06.607644  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:06.607664  398903 retry.go:31] will retry after 4.884355554s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:06.958122  398903 type.go:168] "Request Body" body=""
	I1212 20:29:06.958195  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:06.958538  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:07.458326  398903 type.go:168] "Request Body" body=""
	I1212 20:29:07.458394  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:07.458746  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:07.958381  398903 type.go:168] "Request Body" body=""
	I1212 20:29:07.958480  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:07.958781  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:07.958832  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:08.351452  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:08.404529  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:08.407970  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:08.408008  398903 retry.go:31] will retry after 4.723006947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:08.458208  398903 type.go:168] "Request Body" body=""
	I1212 20:29:08.458304  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:08.458620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:08.958349  398903 type.go:168] "Request Body" body=""
	I1212 20:29:08.958418  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:08.958733  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:09.458562  398903 type.go:168] "Request Body" body=""
	I1212 20:29:09.458637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:09.458962  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:09.957658  398903 type.go:168] "Request Body" body=""
	I1212 20:29:09.957734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:09.958100  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:10.458537  398903 type.go:168] "Request Body" body=""
	I1212 20:29:10.458602  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:10.458869  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:10.458910  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:10.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:29:10.957729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:10.958048  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:11.457940  398903 type.go:168] "Request Body" body=""
	I1212 20:29:11.458047  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:11.458416  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:11.492814  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:11.557889  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:11.557940  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:11.557960  398903 retry.go:31] will retry after 4.177574733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:11.958412  398903 type.go:168] "Request Body" body=""
	I1212 20:29:11.958494  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:11.958766  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:12.458544  398903 type.go:168] "Request Body" body=""
	I1212 20:29:12.458627  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:12.458916  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:12.458972  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:12.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:29:12.957732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:12.958047  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:13.131713  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:13.192350  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:13.192414  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:13.192433  398903 retry.go:31] will retry after 8.846505763s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:13.457684  398903 type.go:168] "Request Body" body=""
	I1212 20:29:13.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:13.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:13.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:29:13.957726  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:13.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:14.457780  398903 type.go:168] "Request Body" body=""
	I1212 20:29:14.457878  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:14.458172  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:14.957892  398903 type.go:168] "Request Body" body=""
	I1212 20:29:14.957968  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:14.958296  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:14.958356  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:15.457665  398903 type.go:168] "Request Body" body=""
	I1212 20:29:15.457745  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:15.458081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:15.737088  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:15.794323  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:15.794363  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:15.794386  398903 retry.go:31] will retry after 13.823463892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:15.958001  398903 type.go:168] "Request Body" body=""
	I1212 20:29:15.958077  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:15.958395  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:16.458178  398903 type.go:168] "Request Body" body=""
	I1212 20:29:16.458264  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:16.458517  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:16.958289  398903 type.go:168] "Request Body" body=""
	I1212 20:29:16.958364  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:16.958733  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:16.958807  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:17.458384  398903 type.go:168] "Request Body" body=""
	I1212 20:29:17.458485  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:17.458800  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:17.958573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:17.958679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:17.958934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:18.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:29:18.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:18.458009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:18.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:29:18.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:18.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:19.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:29:19.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:19.457978  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:19.458044  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:19.957579  398903 type.go:168] "Request Body" body=""
	I1212 20:29:19.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:19.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:20.457635  398903 type.go:168] "Request Body" body=""
	I1212 20:29:20.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:20.458035  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:20.957568  398903 type.go:168] "Request Body" body=""
	I1212 20:29:20.957646  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:20.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:21.457974  398903 type.go:168] "Request Body" body=""
	I1212 20:29:21.458051  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:21.458401  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:21.458459  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:21.958216  398903 type.go:168] "Request Body" body=""
	I1212 20:29:21.958294  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:21.958620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:22.040027  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:22.098166  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:22.102301  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:22.102333  398903 retry.go:31] will retry after 9.311877294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:22.458542  398903 type.go:168] "Request Body" body=""
	I1212 20:29:22.458608  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:22.458864  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:22.957555  398903 type.go:168] "Request Body" body=""
	I1212 20:29:22.957628  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:22.957965  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:23.457696  398903 type.go:168] "Request Body" body=""
	I1212 20:29:23.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:23.458108  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:23.957780  398903 type.go:168] "Request Body" body=""
	I1212 20:29:23.957869  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:23.958143  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:23.958184  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:24.457666  398903 type.go:168] "Request Body" body=""
	I1212 20:29:24.457740  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:24.458060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:24.957754  398903 type.go:168] "Request Body" body=""
	I1212 20:29:24.957831  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:24.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:25.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:29:25.457678  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:25.457956  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:25.958502  398903 type.go:168] "Request Body" body=""
	I1212 20:29:25.958583  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:25.958919  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:25.958993  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:26.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:29:26.457736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:26.458131  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:26.957783  398903 type.go:168] "Request Body" body=""
	I1212 20:29:26.957860  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:26.958177  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:27.457614  398903 type.go:168] "Request Body" body=""
	I1212 20:29:27.457693  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:27.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:27.957616  398903 type.go:168] "Request Body" body=""
	I1212 20:29:27.957698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:27.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:28.457711  398903 type.go:168] "Request Body" body=""
	I1212 20:29:28.457785  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:28.458119  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:28.458170  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:28.957619  398903 type.go:168] "Request Body" body=""
	I1212 20:29:28.957713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:28.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:29.457661  398903 type.go:168] "Request Body" body=""
	I1212 20:29:29.457736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:29.458113  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:29.618498  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:29.673247  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:29.677091  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:29.677126  398903 retry.go:31] will retry after 12.247484069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:29.958487  398903 type.go:168] "Request Body" body=""
	I1212 20:29:29.958556  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:29.958828  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:30.457589  398903 type.go:168] "Request Body" body=""
	I1212 20:29:30.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:30.458053  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:30.957764  398903 type.go:168] "Request Body" body=""
	I1212 20:29:30.957837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:30.958165  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:30.958221  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:31.415106  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:31.457708  398903 type.go:168] "Request Body" body=""
	I1212 20:29:31.457795  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:31.458059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:31.477657  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:31.481452  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:31.481486  398903 retry.go:31] will retry after 29.999837192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:31.958251  398903 type.go:168] "Request Body" body=""
	I1212 20:29:31.958329  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:31.958678  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:32.458335  398903 type.go:168] "Request Body" body=""
	I1212 20:29:32.458415  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:32.458816  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:32.958367  398903 type.go:168] "Request Body" body=""
	I1212 20:29:32.958440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:32.958702  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:32.958743  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:33.458498  398903 type.go:168] "Request Body" body=""
	I1212 20:29:33.458574  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:33.458942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:33.957518  398903 type.go:168] "Request Body" body=""
	I1212 20:29:33.957595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:33.957939  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:34.457617  398903 type.go:168] "Request Body" body=""
	I1212 20:29:34.457695  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:34.457969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:34.957613  398903 type.go:168] "Request Body" body=""
	I1212 20:29:34.957696  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:34.958009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:35.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:29:35.457690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:35.458075  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:35.458135  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:35.957713  398903 type.go:168] "Request Body" body=""
	I1212 20:29:35.957790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:35.958111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:36.457989  398903 type.go:168] "Request Body" body=""
	I1212 20:29:36.458070  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:36.458457  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:36.958268  398903 type.go:168] "Request Body" body=""
	I1212 20:29:36.958361  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:36.958681  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:37.458419  398903 type.go:168] "Request Body" body=""
	I1212 20:29:37.458489  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:37.458760  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:37.458803  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:37.958548  398903 type.go:168] "Request Body" body=""
	I1212 20:29:37.958632  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:37.958989  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:38.457703  398903 type.go:168] "Request Body" body=""
	I1212 20:29:38.457783  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:38.458130  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:38.957582  398903 type.go:168] "Request Body" body=""
	I1212 20:29:38.957648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:38.957909  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:39.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:29:39.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:39.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:39.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:39.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:39.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:39.958142  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:40.458512  398903 type.go:168] "Request Body" body=""
	I1212 20:29:40.458585  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:40.458875  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:40.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:40.957663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:40.957999  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:41.458005  398903 type.go:168] "Request Body" body=""
	I1212 20:29:41.458079  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:41.458415  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:41.924900  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:41.958510  398903 type.go:168] "Request Body" body=""
	I1212 20:29:41.958584  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:41.958850  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:41.958891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:42.001052  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:42.001094  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:42.001115  398903 retry.go:31] will retry after 30.772279059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:42.457672  398903 type.go:168] "Request Body" body=""
	I1212 20:29:42.457755  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:42.458082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:42.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:29:42.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:42.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:43.458540  398903 type.go:168] "Request Body" body=""
	I1212 20:29:43.458610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:43.458870  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:43.957586  398903 type.go:168] "Request Body" body=""
	I1212 20:29:43.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:43.958032  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:44.457633  398903 type.go:168] "Request Body" body=""
	I1212 20:29:44.457707  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:44.458045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:44.458100  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:44.957734  398903 type.go:168] "Request Body" body=""
	I1212 20:29:44.957834  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:44.958170  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:45.457726  398903 type.go:168] "Request Body" body=""
	I1212 20:29:45.457799  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:45.458152  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:45.957997  398903 type.go:168] "Request Body" body=""
	I1212 20:29:45.958081  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:45.958445  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:46.458286  398903 type.go:168] "Request Body" body=""
	I1212 20:29:46.458355  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:46.458622  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:46.458663  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:46.958455  398903 type.go:168] "Request Body" body=""
	I1212 20:29:46.958553  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:46.958947  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:47.457794  398903 type.go:168] "Request Body" body=""
	I1212 20:29:47.457932  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:47.458463  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:47.958292  398903 type.go:168] "Request Body" body=""
	I1212 20:29:47.958370  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:47.958645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:48.458483  398903 type.go:168] "Request Body" body=""
	I1212 20:29:48.458555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:48.458899  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:48.458971  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:48.957649  398903 type.go:168] "Request Body" body=""
	I1212 20:29:48.957731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:48.958090  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:49.457581  398903 type.go:168] "Request Body" body=""
	I1212 20:29:49.457649  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:49.457920  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:49.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:29:49.957681  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:49.958050  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:50.457756  398903 type.go:168] "Request Body" body=""
	I1212 20:29:50.457838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:50.458163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:50.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:50.957647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:50.957983  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:50.958033  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:51.457978  398903 type.go:168] "Request Body" body=""
	I1212 20:29:51.458054  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:51.458398  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:51.958201  398903 type.go:168] "Request Body" body=""
	I1212 20:29:51.958282  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:51.958598  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:52.458345  398903 type.go:168] "Request Body" body=""
	I1212 20:29:52.458418  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:52.458689  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:52.958457  398903 type.go:168] "Request Body" body=""
	I1212 20:29:52.958540  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:52.958883  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:52.958945  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:53.457615  398903 type.go:168] "Request Body" body=""
	I1212 20:29:53.457698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:53.458072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:53.957603  398903 type.go:168] "Request Body" body=""
	I1212 20:29:53.957674  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:53.957991  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:54.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:54.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:54.458053  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:54.957787  398903 type.go:168] "Request Body" body=""
	I1212 20:29:54.957892  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:54.958225  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:55.457579  398903 type.go:168] "Request Body" body=""
	I1212 20:29:55.457654  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:55.457934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:55.457987  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:55.957904  398903 type.go:168] "Request Body" body=""
	I1212 20:29:55.957979  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:55.958319  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:56.458108  398903 type.go:168] "Request Body" body=""
	I1212 20:29:56.458185  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:56.458525  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:56.958251  398903 type.go:168] "Request Body" body=""
	I1212 20:29:56.958317  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:56.958572  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:57.458381  398903 type.go:168] "Request Body" body=""
	I1212 20:29:57.458456  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:57.458824  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:57.458880  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:57.957590  398903 type.go:168] "Request Body" body=""
	I1212 20:29:57.957685  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:57.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:58.457591  398903 type.go:168] "Request Body" body=""
	I1212 20:29:58.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:58.457943  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:58.957651  398903 type.go:168] "Request Body" body=""
	I1212 20:29:58.957737  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:58.958104  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:59.457826  398903 type.go:168] "Request Body" body=""
	I1212 20:29:59.457924  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:59.458273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:59.957645  398903 type.go:168] "Request Body" body=""
	I1212 20:29:59.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:59.958054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:59.958118  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:00.457778  398903 type.go:168] "Request Body" body=""
	I1212 20:30:00.457870  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:00.458208  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:00.958235  398903 type.go:168] "Request Body" body=""
	I1212 20:30:00.958321  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:00.958755  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:01.460861  398903 type.go:168] "Request Body" body=""
	I1212 20:30:01.460950  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:01.461277  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:01.481640  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:30:01.559465  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:01.559521  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:01.559544  398903 retry.go:31] will retry after 33.36515596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:01.958099  398903 type.go:168] "Request Body" body=""
	I1212 20:30:01.958188  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:01.958490  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:01.958533  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:02.458305  398903 type.go:168] "Request Body" body=""
	I1212 20:30:02.458381  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:02.458719  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:02.958386  398903 type.go:168] "Request Body" body=""
	I1212 20:30:02.958464  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:02.958745  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:03.457579  398903 type.go:168] "Request Body" body=""
	I1212 20:30:03.457694  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:03.458099  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:03.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:30:03.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:03.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:04.457668  398903 type.go:168] "Request Body" body=""
	I1212 20:30:04.457751  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:04.458056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:04.458116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:04.957692  398903 type.go:168] "Request Body" body=""
	I1212 20:30:04.957771  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:04.958103  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:05.457691  398903 type.go:168] "Request Body" body=""
	I1212 20:30:05.457777  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:05.458124  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:05.958166  398903 type.go:168] "Request Body" body=""
	I1212 20:30:05.958257  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:05.958561  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:06.458375  398903 type.go:168] "Request Body" body=""
	I1212 20:30:06.458451  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:06.458788  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:06.458844  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:06.957529  398903 type.go:168] "Request Body" body=""
	I1212 20:30:06.957610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:06.957955  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:07.457552  398903 type.go:168] "Request Body" body=""
	I1212 20:30:07.457657  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:07.457968  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:07.957700  398903 type.go:168] "Request Body" body=""
	I1212 20:30:07.957780  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:07.958080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:08.457647  398903 type.go:168] "Request Body" body=""
	I1212 20:30:08.457728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:08.458065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:08.957730  398903 type.go:168] "Request Body" body=""
	I1212 20:30:08.957837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:08.958111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:08.958162  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:09.457851  398903 type.go:168] "Request Body" body=""
	I1212 20:30:09.457929  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:09.458309  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:09.958049  398903 type.go:168] "Request Body" body=""
	I1212 20:30:09.958147  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:09.958566  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:10.458362  398903 type.go:168] "Request Body" body=""
	I1212 20:30:10.458440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:10.458707  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:10.958517  398903 type.go:168] "Request Body" body=""
	I1212 20:30:10.958590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:10.958916  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:10.958976  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:11.457913  398903 type.go:168] "Request Body" body=""
	I1212 20:30:11.458009  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:11.458358  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:11.958078  398903 type.go:168] "Request Body" body=""
	I1212 20:30:11.958148  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:11.958429  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:12.458295  398903 type.go:168] "Request Body" body=""
	I1212 20:30:12.458371  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:12.458726  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:12.774318  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:30:12.840421  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:12.840464  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:12.840483  398903 retry.go:31] will retry after 30.011296842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:12.957679  398903 type.go:168] "Request Body" body=""
	I1212 20:30:12.957756  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:12.958081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:13.457610  398903 type.go:168] "Request Body" body=""
	I1212 20:30:13.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:13.457937  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:13.457978  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:13.957691  398903 type.go:168] "Request Body" body=""
	I1212 20:30:13.957779  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:13.958199  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:14.457740  398903 type.go:168] "Request Body" body=""
	I1212 20:30:14.457821  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:14.458184  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:14.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:30:14.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:14.958021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:15.457670  398903 type.go:168] "Request Body" body=""
	I1212 20:30:15.457751  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:15.458088  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:15.458148  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:15.958126  398903 type.go:168] "Request Body" body=""
	I1212 20:30:15.958215  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:15.958644  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:16.458362  398903 type.go:168] "Request Body" body=""
	I1212 20:30:16.458429  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:16.458692  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:16.958433  398903 type.go:168] "Request Body" body=""
	I1212 20:30:16.958508  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:16.958865  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:17.458563  398903 type.go:168] "Request Body" body=""
	I1212 20:30:17.458662  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:17.459072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:17.459137  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:17.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:30:17.957765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:17.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:18.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:30:18.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:18.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:18.957647  398903 type.go:168] "Request Body" body=""
	I1212 20:30:18.957740  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:18.958158  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:19.457570  398903 type.go:168] "Request Body" body=""
	I1212 20:30:19.457653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:19.457996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:19.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:30:19.957747  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:19.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:19.958157  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:20.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:30:20.457785  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:20.458135  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:20.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:30:20.957690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:20.958023  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:21.458157  398903 type.go:168] "Request Body" body=""
	I1212 20:30:21.458249  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:21.458570  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:21.958397  398903 type.go:168] "Request Body" body=""
	I1212 20:30:21.958474  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:21.958860  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:21.958919  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:22.457576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:22.457650  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:22.457962  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:22.957698  398903 type.go:168] "Request Body" body=""
	I1212 20:30:22.957818  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:22.958168  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:23.457673  398903 type.go:168] "Request Body" body=""
	I1212 20:30:23.457752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:23.458096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:23.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:23.957683  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:23.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:24.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:30:24.457734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:24.458020  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:24.458072  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:24.957672  398903 type.go:168] "Request Body" body=""
	I1212 20:30:24.957748  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:24.958123  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:25.457534  398903 type.go:168] "Request Body" body=""
	I1212 20:30:25.457604  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:25.457872  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:25.958565  398903 type.go:168] "Request Body" body=""
	I1212 20:30:25.958637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:25.958933  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:26.457975  398903 type.go:168] "Request Body" body=""
	I1212 20:30:26.458048  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:26.458392  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:26.458450  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:26.957925  398903 type.go:168] "Request Body" body=""
	I1212 20:30:26.957996  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:26.958288  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:27.457662  398903 type.go:168] "Request Body" body=""
	I1212 20:30:27.457734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:27.458086  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:27.957807  398903 type.go:168] "Request Body" body=""
	I1212 20:30:27.957887  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:27.958218  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:28.457696  398903 type.go:168] "Request Body" body=""
	I1212 20:30:28.457762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:28.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:28.957686  398903 type.go:168] "Request Body" body=""
	I1212 20:30:28.957778  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:28.958129  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:28.958185  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:29.457860  398903 type.go:168] "Request Body" body=""
	I1212 20:30:29.457948  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:29.458268  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:29.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:29.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:29.957934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:30.457654  398903 type.go:168] "Request Body" body=""
	I1212 20:30:30.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:30.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:30.957783  398903 type.go:168] "Request Body" body=""
	I1212 20:30:30.957859  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:30.958248  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:30.958301  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:31.458270  398903 type.go:168] "Request Body" body=""
	I1212 20:30:31.458363  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:31.458639  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:31.958457  398903 type.go:168] "Request Body" body=""
	I1212 20:30:31.958547  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:31.958925  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:32.457675  398903 type.go:168] "Request Body" body=""
	I1212 20:30:32.457752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:32.458042  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:32.957526  398903 type.go:168] "Request Body" body=""
	I1212 20:30:32.957599  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:32.957876  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:33.457638  398903 type.go:168] "Request Body" body=""
	I1212 20:30:33.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:33.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:33.458151  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:33.957835  398903 type.go:168] "Request Body" body=""
	I1212 20:30:33.957912  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:33.958249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:30:34.457709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:34.458076  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.925852  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:30:34.958350  398903 type.go:168] "Request Body" body=""
	I1212 20:30:34.958426  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:34.958704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.987024  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:34.990602  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:34.990708  398903 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 20:30:35.458275  398903 type.go:168] "Request Body" body=""
	I1212 20:30:35.458354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:35.458681  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:35.458739  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:35.958407  398903 type.go:168] "Request Body" body=""
	I1212 20:30:35.958492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:35.958762  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:36.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:30:36.457712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:36.458038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:36.957607  398903 type.go:168] "Request Body" body=""
	I1212 20:30:36.957687  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:36.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:37.457711  398903 type.go:168] "Request Body" body=""
	I1212 20:30:37.457790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:37.458074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:37.957761  398903 type.go:168] "Request Body" body=""
	I1212 20:30:37.957838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:37.958213  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:37.958272  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:38.457940  398903 type.go:168] "Request Body" body=""
	I1212 20:30:38.458016  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:38.458369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:38.958134  398903 type.go:168] "Request Body" body=""
	I1212 20:30:38.958210  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:38.958478  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:39.458248  398903 type.go:168] "Request Body" body=""
	I1212 20:30:39.458336  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:39.458729  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:39.958456  398903 type.go:168] "Request Body" body=""
	I1212 20:30:39.958539  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:39.958888  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:39.958942  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:40.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:30:40.457648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:40.457967  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:40.957645  398903 type.go:168] "Request Body" body=""
	I1212 20:30:40.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:40.958059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:41.458059  398903 type.go:168] "Request Body" body=""
	I1212 20:30:41.458151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:41.458482  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:41.958252  398903 type.go:168] "Request Body" body=""
	I1212 20:30:41.958327  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:41.958608  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:42.458416  398903 type.go:168] "Request Body" body=""
	I1212 20:30:42.458492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:42.458825  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:42.458889  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:42.852572  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:30:42.917565  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:42.921658  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:42.921759  398903 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 20:30:42.924799  398903 out.go:179] * Enabled addons: 
	I1212 20:30:42.926930  398903 addons.go:530] duration metric: took 1m46.993054127s for enable addons: enabled=[]
	I1212 20:30:42.957819  398903 type.go:168] "Request Body" body=""
	I1212 20:30:42.957896  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:42.958219  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:43.457528  398903 type.go:168] "Request Body" body=""
	I1212 20:30:43.457600  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:43.457900  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:43.957607  398903 type.go:168] "Request Body" body=""
	I1212 20:30:43.957687  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:43.958029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:44.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:30:44.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:44.458022  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:44.957587  398903 type.go:168] "Request Body" body=""
	I1212 20:30:44.957676  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:44.957941  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:44.957982  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:45.457697  398903 type.go:168] "Request Body" body=""
	I1212 20:30:45.457796  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:45.458121  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:45.958191  398903 type.go:168] "Request Body" body=""
	I1212 20:30:45.958294  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:45.958612  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:46.458444  398903 type.go:168] "Request Body" body=""
	I1212 20:30:46.458532  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:46.458807  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:46.957599  398903 type.go:168] "Request Body" body=""
	I1212 20:30:46.957698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:46.958064  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:46.958134  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:47.457807  398903 type.go:168] "Request Body" body=""
	I1212 20:30:47.457902  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:47.458266  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:47.957963  398903 type.go:168] "Request Body" body=""
	I1212 20:30:47.958044  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:47.958323  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:48.457878  398903 type.go:168] "Request Body" body=""
	I1212 20:30:48.457954  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:48.458353  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:48.957937  398903 type.go:168] "Request Body" body=""
	I1212 20:30:48.958025  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:48.958407  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:48.958465  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:49.458150  398903 type.go:168] "Request Body" body=""
	I1212 20:30:49.458217  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:49.458483  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:49.958339  398903 type.go:168] "Request Body" body=""
	I1212 20:30:49.958422  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:49.958782  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:50.457522  398903 type.go:168] "Request Body" body=""
	I1212 20:30:50.457619  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:50.457974  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:50.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:30:50.957709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:50.957969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:51.457956  398903 type.go:168] "Request Body" body=""
	I1212 20:30:51.458033  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:51.458372  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:51.458436  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:51.958247  398903 type.go:168] "Request Body" body=""
	I1212 20:30:51.958354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:51.958760  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:52.458531  398903 type.go:168] "Request Body" body=""
	I1212 20:30:52.458606  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:52.458887  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:52.957622  398903 type.go:168] "Request Body" body=""
	I1212 20:30:52.957701  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:52.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:53.457803  398903 type.go:168] "Request Body" body=""
	I1212 20:30:53.457880  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:53.458232  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:53.957948  398903 type.go:168] "Request Body" body=""
	I1212 20:30:53.958039  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:53.958314  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:53.958357  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:54.458007  398903 type.go:168] "Request Body" body=""
	I1212 20:30:54.458120  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:54.458562  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:54.957657  398903 type.go:168] "Request Body" body=""
	I1212 20:30:54.957767  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:54.958125  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:55.457599  398903 type.go:168] "Request Body" body=""
	I1212 20:30:55.457671  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:55.458062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:55.958515  398903 type.go:168] "Request Body" body=""
	I1212 20:30:55.958592  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:55.958958  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:55.959020  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:56.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:30:56.457702  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:56.458059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:56.957581  398903 type.go:168] "Request Body" body=""
	I1212 20:30:56.957655  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:56.957949  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:57.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:30:57.457710  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:57.458063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:57.958430  398903 type.go:168] "Request Body" body=""
	I1212 20:30:57.958528  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:57.958868  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:58.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:30:58.457682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:58.458002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:58.458062  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:58.957718  398903 type.go:168] "Request Body" body=""
	I1212 20:30:58.957798  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:58.958154  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:59.457651  398903 type.go:168] "Request Body" body=""
	I1212 20:30:59.457732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:59.458077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:59.957798  398903 type.go:168] "Request Body" body=""
	I1212 20:30:59.957888  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:59.958201  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:00.457692  398903 type.go:168] "Request Body" body=""
	I1212 20:31:00.457780  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:00.458189  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:00.458250  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:00.957940  398903 type.go:168] "Request Body" body=""
	I1212 20:31:00.958024  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:00.958346  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:01.458223  398903 type.go:168] "Request Body" body=""
	I1212 20:31:01.458299  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:01.458574  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:01.958306  398903 type.go:168] "Request Body" body=""
	I1212 20:31:01.958388  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:01.958736  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:02.458565  398903 type.go:168] "Request Body" body=""
	I1212 20:31:02.458645  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:02.459016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:02.459076  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:02.957720  398903 type.go:168] "Request Body" body=""
	I1212 20:31:02.957798  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:02.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:03.457664  398903 type.go:168] "Request Body" body=""
	I1212 20:31:03.457746  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:03.458099  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:03.957853  398903 type.go:168] "Request Body" body=""
	I1212 20:31:03.957937  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:03.958274  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:04.457595  398903 type.go:168] "Request Body" body=""
	I1212 20:31:04.457669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:04.458030  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:04.957597  398903 type.go:168] "Request Body" body=""
	I1212 20:31:04.957676  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:04.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:04.958098  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:05.457625  398903 type.go:168] "Request Body" body=""
	I1212 20:31:05.457701  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:05.458052  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:05.957782  398903 type.go:168] "Request Body" body=""
	I1212 20:31:05.957863  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:05.958194  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:06.458145  398903 type.go:168] "Request Body" body=""
	I1212 20:31:06.458228  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:06.458587  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:06.958415  398903 type.go:168] "Request Body" body=""
	I1212 20:31:06.958493  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:06.958820  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:06.958879  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:07.457506  398903 type.go:168] "Request Body" body=""
	I1212 20:31:07.457575  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:07.457849  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:07.957622  398903 type.go:168] "Request Body" body=""
	I1212 20:31:07.957714  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:07.958056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:08.457776  398903 type.go:168] "Request Body" body=""
	I1212 20:31:08.457879  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:08.458223  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:08.957577  398903 type.go:168] "Request Body" body=""
	I1212 20:31:08.957652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:08.957982  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:09.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:31:09.457705  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:09.458016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:09.458076  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:09.957794  398903 type.go:168] "Request Body" body=""
	I1212 20:31:09.957907  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:09.958279  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:10.457971  398903 type.go:168] "Request Body" body=""
	I1212 20:31:10.458047  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:10.458382  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:10.958220  398903 type.go:168] "Request Body" body=""
	I1212 20:31:10.958321  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:10.958714  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:11.457646  398903 type.go:168] "Request Body" body=""
	I1212 20:31:11.457724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:11.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:11.458138  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:11.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:31:11.957664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:11.957969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:12.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:31:12.457686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:12.458031  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:12.957743  398903 type.go:168] "Request Body" body=""
	I1212 20:31:12.957841  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:12.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:13.458376  398903 type.go:168] "Request Body" body=""
	I1212 20:31:13.458443  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:13.458763  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:13.458818  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:13.958577  398903 type.go:168] "Request Body" body=""
	I1212 20:31:13.958652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:13.958977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:14.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:31:14.457733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:14.458101  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:14.957799  398903 type.go:168] "Request Body" body=""
	I1212 20:31:14.957875  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:14.958197  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:15.457653  398903 type.go:168] "Request Body" body=""
	I1212 20:31:15.457732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:15.458080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:15.958122  398903 type.go:168] "Request Body" body=""
	I1212 20:31:15.958204  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:15.958537  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:15.958599  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:16.458429  398903 type.go:168] "Request Body" body=""
	I1212 20:31:16.458501  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:16.458769  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:16.957534  398903 type.go:168] "Request Body" body=""
	I1212 20:31:16.957617  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:16.957998  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:17.457728  398903 type.go:168] "Request Body" body=""
	I1212 20:31:17.457806  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:17.458115  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:17.957591  398903 type.go:168] "Request Body" body=""
	I1212 20:31:17.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:17.958019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:18.457741  398903 type.go:168] "Request Body" body=""
	I1212 20:31:18.457847  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:18.458133  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:18.458180  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:18.957696  398903 type.go:168] "Request Body" body=""
	I1212 20:31:18.957790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:18.958212  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:19.457727  398903 type.go:168] "Request Body" body=""
	I1212 20:31:19.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:19.458140  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:19.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:31:19.957742  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:19.958077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:20.457686  398903 type.go:168] "Request Body" body=""
	I1212 20:31:20.457762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:20.458091  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:20.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:31:20.957650  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:20.957923  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:20.957972  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:21.457915  398903 type.go:168] "Request Body" body=""
	I1212 20:31:21.457990  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:21.458320  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:21.958165  398903 type.go:168] "Request Body" body=""
	I1212 20:31:21.958276  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:21.958607  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:22.458365  398903 type.go:168] "Request Body" body=""
	I1212 20:31:22.458440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:22.458716  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:22.958558  398903 type.go:168] "Request Body" body=""
	I1212 20:31:22.958659  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:22.959007  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:22.959071  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:23.457766  398903 type.go:168] "Request Body" body=""
	I1212 20:31:23.457845  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:23.458211  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:23.957896  398903 type.go:168] "Request Body" body=""
	I1212 20:31:23.957969  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:23.958315  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:24.457613  398903 type.go:168] "Request Body" body=""
	I1212 20:31:24.457714  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:24.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:24.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:31:24.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:24.958115  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:25.457623  398903 type.go:168] "Request Body" body=""
	I1212 20:31:25.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:25.457977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:25.458017  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:25.958041  398903 type.go:168] "Request Body" body=""
	I1212 20:31:25.958123  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:25.958512  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:26.458319  398903 type.go:168] "Request Body" body=""
	I1212 20:31:26.458398  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:26.458689  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:26.958470  398903 type.go:168] "Request Body" body=""
	I1212 20:31:26.958549  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:26.958846  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:27.457587  398903 type.go:168] "Request Body" body=""
	I1212 20:31:27.457677  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:27.457993  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:27.458047  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:27.957637  398903 type.go:168] "Request Body" body=""
	I1212 20:31:27.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:27.958051  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:28.457523  398903 type.go:168] "Request Body" body=""
	I1212 20:31:28.457597  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:28.457900  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:28.957667  398903 type.go:168] "Request Body" body=""
	I1212 20:31:28.957755  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:28.958112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:29.457645  398903 type.go:168] "Request Body" body=""
	I1212 20:31:29.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:29.458112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:29.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:29.957515  398903 type.go:168] "Request Body" body=""
	I1212 20:31:29.957590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:29.957922  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:30.457639  398903 type.go:168] "Request Body" body=""
	I1212 20:31:30.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:30.458057  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:30.957753  398903 type.go:168] "Request Body" body=""
	I1212 20:31:30.957854  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:30.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:31.458036  398903 type.go:168] "Request Body" body=""
	I1212 20:31:31.458104  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:31.458369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:31.458409  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:31.958181  398903 type.go:168] "Request Body" body=""
	I1212 20:31:31.958258  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:31.958643  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:32.458473  398903 type.go:168] "Request Body" body=""
	I1212 20:31:32.458585  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:32.458949  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:32.957626  398903 type.go:168] "Request Body" body=""
	I1212 20:31:32.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:32.958012  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:33.457650  398903 type.go:168] "Request Body" body=""
	I1212 20:31:33.457738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:33.458114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:33.957824  398903 type.go:168] "Request Body" body=""
	I1212 20:31:33.957905  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:33.958247  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:33.958303  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:34.458003  398903 type.go:168] "Request Body" body=""
	I1212 20:31:34.458078  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:34.458409  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:34.958240  398903 type.go:168] "Request Body" body=""
	I1212 20:31:34.958349  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:34.958734  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:35.458572  398903 type.go:168] "Request Body" body=""
	I1212 20:31:35.458682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:35.459077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:35.958480  398903 type.go:168] "Request Body" body=""
	I1212 20:31:35.958555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:35.958847  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:35.958891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:36.457738  398903 type.go:168] "Request Body" body=""
	I1212 20:31:36.457817  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:36.458167  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:36.957850  398903 type.go:168] "Request Body" body=""
	I1212 20:31:36.957948  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:36.958275  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:37.457594  398903 type.go:168] "Request Body" body=""
	I1212 20:31:37.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:37.457978  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:37.957634  398903 type.go:168] "Request Body" body=""
	I1212 20:31:37.957712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:37.958057  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:38.457680  398903 type.go:168] "Request Body" body=""
	I1212 20:31:38.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:38.458134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:38.458189  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:38.957510  398903 type.go:168] "Request Body" body=""
	I1212 20:31:38.957592  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:38.957862  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:39.457578  398903 type.go:168] "Request Body" body=""
	I1212 20:31:39.457664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:39.457985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:39.957715  398903 type.go:168] "Request Body" body=""
	I1212 20:31:39.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:39.958106  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:40.457563  398903 type.go:168] "Request Body" body=""
	I1212 20:31:40.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:40.457964  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:40.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:31:40.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:40.958114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:40.958173  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:41.457926  398903 type.go:168] "Request Body" body=""
	I1212 20:31:41.458028  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:41.458354  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:41.958180  398903 type.go:168] "Request Body" body=""
	I1212 20:31:41.958256  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:41.958548  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:42.458349  398903 type.go:168] "Request Body" body=""
	I1212 20:31:42.458439  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:42.458833  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:42.958514  398903 type.go:168] "Request Body" body=""
	I1212 20:31:42.958594  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:42.958932  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:42.958992  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:43.457618  398903 type.go:168] "Request Body" body=""
	I1212 20:31:43.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:43.458058  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:43.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:31:43.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:43.958071  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:44.457779  398903 type.go:168] "Request Body" body=""
	I1212 20:31:44.457857  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:44.458177  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:44.957579  398903 type.go:168] "Request Body" body=""
	I1212 20:31:44.957657  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:44.957982  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:45.457590  398903 type.go:168] "Request Body" body=""
	I1212 20:31:45.457667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:45.458010  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:45.458070  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:45.957784  398903 type.go:168] "Request Body" body=""
	I1212 20:31:45.957877  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:45.958249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:46.458071  398903 type.go:168] "Request Body" body=""
	I1212 20:31:46.458151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:46.458414  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:46.958212  398903 type.go:168] "Request Body" body=""
	I1212 20:31:46.958295  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:46.958642  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:47.458480  398903 type.go:168] "Request Body" body=""
	I1212 20:31:47.458558  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:47.458926  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:47.458982  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:47.957584  398903 type.go:168] "Request Body" body=""
	I1212 20:31:47.957658  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:47.957921  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:48.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:31:48.457764  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:48.458171  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:48.957862  398903 type.go:168] "Request Body" body=""
	I1212 20:31:48.957972  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:48.958326  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:49.458004  398903 type.go:168] "Request Body" body=""
	I1212 20:31:49.458083  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:49.458381  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:49.958209  398903 type.go:168] "Request Body" body=""
	I1212 20:31:49.958290  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:49.958636  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:49.958695  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:50.458420  398903 type.go:168] "Request Body" body=""
	I1212 20:31:50.458495  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:50.458818  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:50.957496  398903 type.go:168] "Request Body" body=""
	I1212 20:31:50.957563  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:50.957832  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:51.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:31:51.457746  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:51.458084  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:51.957648  398903 type.go:168] "Request Body" body=""
	I1212 20:31:51.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:51.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:52.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:31:52.457781  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:52.458111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:52.458163  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:52.957662  398903 type.go:168] "Request Body" body=""
	I1212 20:31:52.957750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:52.958096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:53.457800  398903 type.go:168] "Request Body" body=""
	I1212 20:31:53.457898  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:53.458256  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:53.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:31:53.957647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:53.957914  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:54.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:31:54.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:54.458054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:54.957782  398903 type.go:168] "Request Body" body=""
	I1212 20:31:54.957867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:54.958171  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:54.958225  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:55.457602  398903 type.go:168] "Request Body" body=""
	I1212 20:31:55.457673  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:55.457942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:55.957857  398903 type.go:168] "Request Body" body=""
	I1212 20:31:55.957935  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:55.958273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:56.458155  398903 type.go:168] "Request Body" body=""
	I1212 20:31:56.458233  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:56.458540  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:56.958285  398903 type.go:168] "Request Body" body=""
	I1212 20:31:56.958359  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:56.958625  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:56.958670  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:57.458411  398903 type.go:168] "Request Body" body=""
	I1212 20:31:57.458485  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:57.458823  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:57.958474  398903 type.go:168] "Request Body" body=""
	I1212 20:31:57.958559  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:57.958919  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:58.457568  398903 type.go:168] "Request Body" body=""
	I1212 20:31:58.457647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:58.457965  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:58.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:31:58.957725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:58.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:59.457623  398903 type.go:168] "Request Body" body=""
	I1212 20:31:59.457697  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:59.458016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:59.458072  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:59.957590  398903 type.go:168] "Request Body" body=""
	I1212 20:31:59.957669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:59.957976  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:00.457722  398903 type.go:168] "Request Body" body=""
	I1212 20:32:00.457811  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:00.458158  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:00.958017  398903 type.go:168] "Request Body" body=""
	I1212 20:32:00.958101  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:00.958428  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:01.458294  398903 type.go:168] "Request Body" body=""
	I1212 20:32:01.458366  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:01.458700  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:01.458759  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:01.958578  398903 type.go:168] "Request Body" body=""
	I1212 20:32:01.958660  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:01.959010  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:02.457649  398903 type.go:168] "Request Body" body=""
	I1212 20:32:02.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:02.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:02.957664  398903 type.go:168] "Request Body" body=""
	I1212 20:32:02.957736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:02.958135  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:03.457649  398903 type.go:168] "Request Body" body=""
	I1212 20:32:03.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:03.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:03.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:32:03.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:03.958067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:03.958124  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:04.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:32:04.457689  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:04.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:04.957738  398903 type.go:168] "Request Body" body=""
	I1212 20:32:04.957816  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:04.958159  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:05.457846  398903 type.go:168] "Request Body" body=""
	I1212 20:32:05.457928  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:05.458292  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:05.958124  398903 type.go:168] "Request Body" body=""
	I1212 20:32:05.958202  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:05.958466  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:05.958511  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:06.458381  398903 type.go:168] "Request Body" body=""
	I1212 20:32:06.458469  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:06.458820  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:06.957560  398903 type.go:168] "Request Body" body=""
	I1212 20:32:06.957684  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:06.958040  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:07.457550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:07.457620  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:07.457897  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:07.957602  398903 type.go:168] "Request Body" body=""
	I1212 20:32:07.957684  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:07.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:08.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:32:08.457680  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:08.458006  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:08.458064  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:08.958540  398903 type.go:168] "Request Body" body=""
	I1212 20:32:08.958617  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:08.958908  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:09.457585  398903 type.go:168] "Request Body" body=""
	I1212 20:32:09.457660  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:09.458015  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:09.957606  398903 type.go:168] "Request Body" body=""
	I1212 20:32:09.957683  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:09.958016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:10.457589  398903 type.go:168] "Request Body" body=""
	I1212 20:32:10.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:10.457990  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:10.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:32:10.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:10.958058  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:10.958119  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:11.458077  398903 type.go:168] "Request Body" body=""
	I1212 20:32:11.458157  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:11.458482  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:11.958236  398903 type.go:168] "Request Body" body=""
	I1212 20:32:11.958308  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:11.958586  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:12.458420  398903 type.go:168] "Request Body" body=""
	I1212 20:32:12.458497  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:12.458856  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:12.957555  398903 type.go:168] "Request Body" body=""
	I1212 20:32:12.957638  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:12.957981  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:13.460759  398903 type.go:168] "Request Body" body=""
	I1212 20:32:13.460830  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:13.461068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:13.461109  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:13.957766  398903 type.go:168] "Request Body" body=""
	I1212 20:32:13.957849  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:13.958216  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:14.457793  398903 type.go:168] "Request Body" body=""
	I1212 20:32:14.457868  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:14.458208  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:14.957890  398903 type.go:168] "Request Body" body=""
	I1212 20:32:14.957960  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:14.958230  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:15.457650  398903 type.go:168] "Request Body" body=""
	I1212 20:32:15.457735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:15.458122  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:15.957907  398903 type.go:168] "Request Body" body=""
	I1212 20:32:15.957985  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:15.958378  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:15.958434  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:16.458157  398903 type.go:168] "Request Body" body=""
	I1212 20:32:16.458233  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:16.458504  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:16.958300  398903 type.go:168] "Request Body" body=""
	I1212 20:32:16.958386  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:16.958758  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:17.458562  398903 type.go:168] "Request Body" body=""
	I1212 20:32:17.458639  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:17.458986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:17.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:32:17.957715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:17.958109  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:18.457646  398903 type.go:168] "Request Body" body=""
	I1212 20:32:18.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:18.458061  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:18.458116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:18.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:32:18.957731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:18.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:19.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:32:19.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:19.457938  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:19.957698  398903 type.go:168] "Request Body" body=""
	I1212 20:32:19.957777  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:19.958136  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:20.457625  398903 type.go:168] "Request Body" body=""
	I1212 20:32:20.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:20.458047  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:20.957741  398903 type.go:168] "Request Body" body=""
	I1212 20:32:20.957811  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:20.958082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:20.958125  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:21.458048  398903 type.go:168] "Request Body" body=""
	I1212 20:32:21.458126  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:21.458473  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:21.958279  398903 type.go:168] "Request Body" body=""
	I1212 20:32:21.958354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:21.958679  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:22.458411  398903 type.go:168] "Request Body" body=""
	I1212 20:32:22.458484  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:22.458765  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:22.958550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:22.958632  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:22.958958  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:22.959017  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:23.457629  398903 type.go:168] "Request Body" body=""
	I1212 20:32:23.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:23.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:23.957725  398903 type.go:168] "Request Body" body=""
	I1212 20:32:23.957800  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:23.958134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:24.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:32:24.457721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:24.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:24.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:32:24.957716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:24.958081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:25.457630  398903 type.go:168] "Request Body" body=""
	I1212 20:32:25.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:25.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:25.458090  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:25.958111  398903 type.go:168] "Request Body" body=""
	I1212 20:32:25.958187  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:25.958536  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:26.458306  398903 type.go:168] "Request Body" body=""
	I1212 20:32:26.458383  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:26.458747  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:26.958505  398903 type.go:168] "Request Body" body=""
	I1212 20:32:26.958576  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:26.958841  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:27.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:32:27.457680  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:27.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:27.458127  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:27.957787  398903 type.go:168] "Request Body" body=""
	I1212 20:32:27.957874  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:27.958233  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:28.457931  398903 type.go:168] "Request Body" body=""
	I1212 20:32:28.457998  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:28.458263  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:28.957554  398903 type.go:168] "Request Body" body=""
	I1212 20:32:28.957643  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:28.957977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:29.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:32:29.457711  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:29.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:29.957530  398903 type.go:168] "Request Body" body=""
	I1212 20:32:29.957610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:29.957906  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:29.957953  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:30.457609  398903 type.go:168] "Request Body" body=""
	I1212 20:32:30.457697  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:30.458040  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:30.957778  398903 type.go:168] "Request Body" body=""
	I1212 20:32:30.957864  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:30.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:31.458073  398903 type.go:168] "Request Body" body=""
	I1212 20:32:31.458140  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:31.458418  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:31.958203  398903 type.go:168] "Request Body" body=""
	I1212 20:32:31.958278  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:31.958617  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:31.958671  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:32.458448  398903 type.go:168] "Request Body" body=""
	I1212 20:32:32.458537  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:32.458868  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:32.957533  398903 type.go:168] "Request Body" body=""
	I1212 20:32:32.957609  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:32.957933  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:33.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:32:33.457708  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:33.458036  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:33.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:32:33.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:33.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:34.457588  398903 type.go:168] "Request Body" body=""
	I1212 20:32:34.457663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:34.457997  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:34.458054  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:34.957694  398903 type.go:168] "Request Body" body=""
	I1212 20:32:34.957770  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:34.958112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:35.457630  398903 type.go:168] "Request Body" body=""
	I1212 20:32:35.457708  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:35.458060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:35.957756  398903 type.go:168] "Request Body" body=""
	I1212 20:32:35.957825  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:35.958163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:36.458166  398903 type.go:168] "Request Body" body=""
	I1212 20:32:36.458243  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:36.458598  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:36.458654  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:36.958444  398903 type.go:168] "Request Body" body=""
	I1212 20:32:36.958533  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:36.958889  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:37.458453  398903 type.go:168] "Request Body" body=""
	I1212 20:32:37.458552  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:37.458884  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:37.957603  398903 type.go:168] "Request Body" body=""
	I1212 20:32:37.957686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:37.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:38.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:32:38.457739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:38.458072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:38.957536  398903 type.go:168] "Request Body" body=""
	I1212 20:32:38.957609  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:38.957905  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:38.957951  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:39.457634  398903 type.go:168] "Request Body" body=""
	I1212 20:32:39.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:39.458054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:39.957793  398903 type.go:168] "Request Body" body=""
	I1212 20:32:39.957878  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:39.958188  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:40.458558  398903 type.go:168] "Request Body" body=""
	I1212 20:32:40.458626  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:40.458896  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:40.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:32:40.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:40.958066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:40.958120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:41.457917  398903 type.go:168] "Request Body" body=""
	I1212 20:32:41.458003  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:41.458345  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:41.958008  398903 type.go:168] "Request Body" body=""
	I1212 20:32:41.958090  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:41.958391  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:42.458186  398903 type.go:168] "Request Body" body=""
	I1212 20:32:42.458268  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:42.458645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:42.958471  398903 type.go:168] "Request Body" body=""
	I1212 20:32:42.958551  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:42.958913  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:42.958969  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:43.457567  398903 type.go:168] "Request Body" body=""
	I1212 20:32:43.457639  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:43.457970  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:43.957654  398903 type.go:168] "Request Body" body=""
	I1212 20:32:43.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:43.958127  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:44.457848  398903 type.go:168] "Request Body" body=""
	I1212 20:32:44.457925  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:44.458300  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:44.957921  398903 type.go:168] "Request Body" body=""
	I1212 20:32:44.957989  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:44.958269  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:45.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:32:45.457750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:45.458108  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:45.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:45.957919  398903 type.go:168] "Request Body" body=""
	I1212 20:32:45.958010  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:45.958428  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:46.458249  398903 type.go:168] "Request Body" body=""
	I1212 20:32:46.458344  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:46.458620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:46.958392  398903 type.go:168] "Request Body" body=""
	I1212 20:32:46.958479  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:46.958844  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:47.457550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:47.457637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:47.457976  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:47.957652  398903 type.go:168] "Request Body" body=""
	I1212 20:32:47.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:47.957996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:47.958035  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:48.457660  398903 type.go:168] "Request Body" body=""
	I1212 20:32:48.457733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:48.458085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:48.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:32:48.957717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:48.958068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:49.457759  398903 type.go:168] "Request Body" body=""
	I1212 20:32:49.457832  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:49.458095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:49.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:32:49.957718  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:49.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:49.958116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:50.457791  398903 type.go:168] "Request Body" body=""
	I1212 20:32:50.457875  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:50.458204  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:50.957582  398903 type.go:168] "Request Body" body=""
	I1212 20:32:50.957654  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:50.957961  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:51.457942  398903 type.go:168] "Request Body" body=""
	I1212 20:32:51.458024  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:51.458587  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:51.958377  398903 type.go:168] "Request Body" body=""
	I1212 20:32:51.958463  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:51.958946  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:51.959008  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:52.457596  398903 type.go:168] "Request Body" body=""
	I1212 20:32:52.457667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:52.457937  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:52.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:32:52.957732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:52.958048  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:53.457745  398903 type.go:168] "Request Body" body=""
	I1212 20:32:53.457818  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:53.458155  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:53.958157  398903 type.go:168] "Request Body" body=""
	I1212 20:32:53.958227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:53.958497  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:54.458351  398903 type.go:168] "Request Body" body=""
	I1212 20:32:54.458435  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:54.458785  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:54.458844  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:54.957837  398903 type.go:168] "Request Body" body=""
	I1212 20:32:54.957927  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:54.958377  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:55.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:32:55.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:55.458049  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:55.958082  398903 type.go:168] "Request Body" body=""
	I1212 20:32:55.958157  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:55.958506  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:56.458323  398903 type.go:168] "Request Body" body=""
	I1212 20:32:56.458423  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:56.458789  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:56.958570  398903 type.go:168] "Request Body" body=""
	I1212 20:32:56.958641  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:56.958907  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:56.958949  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:57.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:32:57.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:57.458009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:57.957647  398903 type.go:168] "Request Body" body=""
	I1212 20:32:57.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:57.958085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:58.457771  398903 type.go:168] "Request Body" body=""
	I1212 20:32:58.457845  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:58.458182  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:58.957910  398903 type.go:168] "Request Body" body=""
	I1212 20:32:58.957990  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:58.958333  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:59.458167  398903 type.go:168] "Request Body" body=""
	I1212 20:32:59.458246  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:59.458600  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:59.458673  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:59.958419  398903 type.go:168] "Request Body" body=""
	I1212 20:32:59.958492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:59.958763  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:00.458626  398903 type.go:168] "Request Body" body=""
	I1212 20:33:00.458718  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:00.459178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:00.957917  398903 type.go:168] "Request Body" body=""
	I1212 20:33:00.957999  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:00.958339  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:01.458146  398903 type.go:168] "Request Body" body=""
	I1212 20:33:01.458227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:01.458496  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:01.958247  398903 type.go:168] "Request Body" body=""
	I1212 20:33:01.958324  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:01.958679  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:01.958746  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:02.458517  398903 type.go:168] "Request Body" body=""
	I1212 20:33:02.458595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:02.458922  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:02.957588  398903 type.go:168] "Request Body" body=""
	I1212 20:33:02.957664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:02.957961  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:03.457658  398903 type.go:168] "Request Body" body=""
	I1212 20:33:03.457735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:03.458091  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:03.957689  398903 type.go:168] "Request Body" body=""
	I1212 20:33:03.957766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:03.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:04.457590  398903 type.go:168] "Request Body" body=""
	I1212 20:33:04.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:04.458004  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:04.458057  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:04.957694  398903 type.go:168] "Request Body" body=""
	I1212 20:33:04.957771  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:04.958097  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:05.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:33:05.457724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:05.458077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:05.957795  398903 type.go:168] "Request Body" body=""
	I1212 20:33:05.957876  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:05.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:06.458126  398903 type.go:168] "Request Body" body=""
	I1212 20:33:06.458201  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:06.458609  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:06.458666  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:06.958431  398903 type.go:168] "Request Body" body=""
	I1212 20:33:06.958510  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:06.958861  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:07.458432  398903 type.go:168] "Request Body" body=""
	I1212 20:33:07.458505  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:07.458769  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:07.958549  398903 type.go:168] "Request Body" body=""
	I1212 20:33:07.958631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:07.958975  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:08.457668  398903 type.go:168] "Request Body" body=""
	I1212 20:33:08.457744  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:08.458100  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:08.957714  398903 type.go:168] "Request Body" body=""
	I1212 20:33:08.957786  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:08.958051  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:08.958096  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:09.457741  398903 type.go:168] "Request Body" body=""
	I1212 20:33:09.457817  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:09.458145  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:09.957623  398903 type.go:168] "Request Body" body=""
	I1212 20:33:09.957707  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:09.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:10.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:33:10.457729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:10.458029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:10.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:33:10.957729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:10.958065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:10.958120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:11.457959  398903 type.go:168] "Request Body" body=""
	I1212 20:33:11.458036  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:11.458394  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:11.958170  398903 type.go:168] "Request Body" body=""
	I1212 20:33:11.958258  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:11.958549  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:12.458358  398903 type.go:168] "Request Body" body=""
	I1212 20:33:12.458435  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:12.458775  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:12.957520  398903 type.go:168] "Request Body" body=""
	I1212 20:33:12.957604  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:12.957972  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:13.458501  398903 type.go:168] "Request Body" body=""
	I1212 20:33:13.458572  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:13.458848  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:13.458891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:13.957574  398903 type.go:168] "Request Body" body=""
	I1212 20:33:13.957653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:13.957991  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:14.457577  398903 type.go:168] "Request Body" body=""
	I1212 20:33:14.457656  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:14.457996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:14.957521  398903 type.go:168] "Request Body" body=""
	I1212 20:33:14.957595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:14.957928  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:15.457515  398903 type.go:168] "Request Body" body=""
	I1212 20:33:15.457593  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:15.457969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:15.957742  398903 type.go:168] "Request Body" body=""
	I1212 20:33:15.957819  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:15.958159  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:15.958212  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:16.457912  398903 type.go:168] "Request Body" body=""
	I1212 20:33:16.457988  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:16.458249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:16.957938  398903 type.go:168] "Request Body" body=""
	I1212 20:33:16.958013  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:16.958371  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:17.457903  398903 type.go:168] "Request Body" body=""
	I1212 20:33:17.457988  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:17.458356  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:17.957551  398903 type.go:168] "Request Body" body=""
	I1212 20:33:17.957628  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:17.957895  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:18.457585  398903 type.go:168] "Request Body" body=""
	I1212 20:33:18.457663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:18.458004  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:18.458060  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:18.957651  398903 type.go:168] "Request Body" body=""
	I1212 20:33:18.957727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:18.958085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:19.457757  398903 type.go:168] "Request Body" body=""
	I1212 20:33:19.457827  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:19.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:19.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:33:19.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:19.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:20.457628  398903 type.go:168] "Request Body" body=""
	I1212 20:33:20.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:20.458050  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:20.458103  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:20.957580  398903 type.go:168] "Request Body" body=""
	I1212 20:33:20.957651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:20.957981  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:21.457718  398903 type.go:168] "Request Body" body=""
	I1212 20:33:21.457793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:21.458138  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:21.957850  398903 type.go:168] "Request Body" body=""
	I1212 20:33:21.957933  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:21.958282  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:22.457957  398903 type.go:168] "Request Body" body=""
	I1212 20:33:22.458031  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:22.458362  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:22.458419  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:22.958162  398903 type.go:168] "Request Body" body=""
	I1212 20:33:22.958237  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:22.958574  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:23.458385  398903 type.go:168] "Request Body" body=""
	I1212 20:33:23.458462  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:23.458816  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:23.958452  398903 type.go:168] "Request Body" body=""
	I1212 20:33:23.958525  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:23.958802  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:24.458538  398903 type.go:168] "Request Body" body=""
	I1212 20:33:24.458623  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:24.458972  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:24.459028  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:24.957567  398903 type.go:168] "Request Body" body=""
	I1212 20:33:24.957643  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:24.957987  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:25.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:33:25.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:25.458002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:25.957886  398903 type.go:168] "Request Body" body=""
	I1212 20:33:25.957967  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:25.958322  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:26.458268  398903 type.go:168] "Request Body" body=""
	I1212 20:33:26.458344  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:26.458704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:26.958389  398903 type.go:168] "Request Body" body=""
	I1212 20:33:26.958460  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:26.958721  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:26.958761  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:27.458544  398903 type.go:168] "Request Body" body=""
	I1212 20:33:27.458621  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:27.458969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:27.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:33:27.957682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:27.958006  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:28.457568  398903 type.go:168] "Request Body" body=""
	I1212 20:33:28.457642  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:28.457915  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:28.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:33:28.957711  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:28.958067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:29.457799  398903 type.go:168] "Request Body" body=""
	I1212 20:33:29.457877  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:29.458218  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:29.458292  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:29.957566  398903 type.go:168] "Request Body" body=""
	I1212 20:33:29.957640  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:29.957986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:30.457705  398903 type.go:168] "Request Body" body=""
	I1212 20:33:30.457788  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:30.458134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:30.957840  398903 type.go:168] "Request Body" body=""
	I1212 20:33:30.957922  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:30.958258  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:31.458070  398903 type.go:168] "Request Body" body=""
	I1212 20:33:31.458149  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:31.458407  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:31.458480  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:31.958244  398903 type.go:168] "Request Body" body=""
	I1212 20:33:31.958322  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:31.958670  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:32.458475  398903 type.go:168] "Request Body" body=""
	I1212 20:33:32.458555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:32.458902  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:32.958470  398903 type.go:168] "Request Body" body=""
	I1212 20:33:32.958550  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:32.958844  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:33.457551  398903 type.go:168] "Request Body" body=""
	I1212 20:33:33.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:33.457948  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:33.957664  398903 type.go:168] "Request Body" body=""
	I1212 20:33:33.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:33.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:33.958117  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:34.457524  398903 type.go:168] "Request Body" body=""
	I1212 20:33:34.457599  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:34.457902  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:34.957627  398903 type.go:168] "Request Body" body=""
	I1212 20:33:34.957704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:34.958079  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:35.457784  398903 type.go:168] "Request Body" body=""
	I1212 20:33:35.457914  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:35.458250  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:35.958142  398903 type.go:168] "Request Body" body=""
	I1212 20:33:35.958225  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:35.958508  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:35.958562  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:36.458394  398903 type.go:168] "Request Body" body=""
	I1212 20:33:36.458478  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:36.458822  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:36.957589  398903 type.go:168] "Request Body" body=""
	I1212 20:33:36.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:36.958009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:37.457586  398903 type.go:168] "Request Body" body=""
	I1212 20:33:37.457669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:37.458096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:37.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:33:37.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:37.958113  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:38.457820  398903 type.go:168] "Request Body" body=""
	I1212 20:33:38.457902  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:38.458236  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:38.458295  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:38.957610  398903 type.go:168] "Request Body" body=""
	I1212 20:33:38.957699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:38.958001  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:39.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:33:39.457722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:39.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:39.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:33:39.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:39.958083  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:40.457768  398903 type.go:168] "Request Body" body=""
	I1212 20:33:40.457840  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:40.458168  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:40.957672  398903 type.go:168] "Request Body" body=""
	I1212 20:33:40.957758  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:40.958165  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:40.958231  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:41.458222  398903 type.go:168] "Request Body" body=""
	I1212 20:33:41.458298  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:41.458630  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:41.958341  398903 type.go:168] "Request Body" body=""
	I1212 20:33:41.958427  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:41.958700  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:42.458517  398903 type.go:168] "Request Body" body=""
	I1212 20:33:42.458591  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:42.458943  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:42.957649  398903 type.go:168] "Request Body" body=""
	I1212 20:33:42.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:42.958066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:43.457746  398903 type.go:168] "Request Body" body=""
	I1212 20:33:43.457813  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:43.458089  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:43.458129  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:43.957791  398903 type.go:168] "Request Body" body=""
	I1212 20:33:43.957883  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:43.958248  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:44.457980  398903 type.go:168] "Request Body" body=""
	I1212 20:33:44.458055  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:44.458393  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:44.958151  398903 type.go:168] "Request Body" body=""
	I1212 20:33:44.958223  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:44.958490  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:45.458269  398903 type.go:168] "Request Body" body=""
	I1212 20:33:45.458343  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:45.458708  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:45.458764  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:45.958513  398903 type.go:168] "Request Body" body=""
	I1212 20:33:45.958590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:45.958931  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:46.457565  398903 type.go:168] "Request Body" body=""
	I1212 20:33:46.457633  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:46.457910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:46.957631  398903 type.go:168] "Request Body" body=""
	I1212 20:33:46.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:46.958128  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:47.457846  398903 type.go:168] "Request Body" body=""
	I1212 20:33:47.457922  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:47.458245  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:47.957545  398903 type.go:168] "Request Body" body=""
	I1212 20:33:47.957618  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:47.957914  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:47.957963  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:48.457643  398903 type.go:168] "Request Body" body=""
	I1212 20:33:48.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:48.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:48.957629  398903 type.go:168] "Request Body" body=""
	I1212 20:33:48.957712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:48.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:49.457729  398903 type.go:168] "Request Body" body=""
	I1212 20:33:49.457799  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:49.458103  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:49.957633  398903 type.go:168] "Request Body" body=""
	I1212 20:33:49.957725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:49.958056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:49.958114  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:50.457640  398903 type.go:168] "Request Body" body=""
	I1212 20:33:50.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:50.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:50.957791  398903 type.go:168] "Request Body" body=""
	I1212 20:33:50.957864  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:50.958188  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:51.458156  398903 type.go:168] "Request Body" body=""
	I1212 20:33:51.458244  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:51.458588  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:51.958381  398903 type.go:168] "Request Body" body=""
	I1212 20:33:51.958464  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:51.958840  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:51.958897  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:52.458422  398903 type.go:168] "Request Body" body=""
	I1212 20:33:52.458495  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:52.458781  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:52.958521  398903 type.go:168] "Request Body" body=""
	I1212 20:33:52.958596  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:52.958935  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:53.457563  398903 type.go:168] "Request Body" body=""
	I1212 20:33:53.457641  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:53.457994  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:53.957675  398903 type.go:168] "Request Body" body=""
	I1212 20:33:53.957749  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:53.958046  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:54.457737  398903 type.go:168] "Request Body" body=""
	I1212 20:33:54.457815  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:54.458164  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:54.458229  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:54.957758  398903 type.go:168] "Request Body" body=""
	I1212 20:33:54.957838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:54.958212  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:55.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:33:55.457673  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:55.458019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:55.958073  398903 type.go:168] "Request Body" body=""
	I1212 20:33:55.958151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:55.958481  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:56.458356  398903 type.go:168] "Request Body" body=""
	I1212 20:33:56.458518  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:56.458867  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:56.458919  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:56.958475  398903 type.go:168] "Request Body" body=""
	I1212 20:33:56.958546  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:56.958806  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:57.457573  398903 type.go:168] "Request Body" body=""
	I1212 20:33:57.457662  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:57.458019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:57.957708  398903 type.go:168] "Request Body" body=""
	I1212 20:33:57.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:57.958149  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:58.457519  398903 type.go:168] "Request Body" body=""
	I1212 20:33:58.457596  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:58.457910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:58.957618  398903 type.go:168] "Request Body" body=""
	I1212 20:33:58.957702  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:58.958029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:58.958086  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:59.457639  398903 type.go:168] "Request Body" body=""
	I1212 20:33:59.457717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:59.458079  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:59.957625  398903 type.go:168] "Request Body" body=""
	I1212 20:33:59.957695  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:59.958025  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:00.457684  398903 type.go:168] "Request Body" body=""
	I1212 20:34:00.457770  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:00.458220  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:00.957723  398903 type.go:168] "Request Body" body=""
	I1212 20:34:00.957815  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:00.958152  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:00.958209  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:01.458053  398903 type.go:168] "Request Body" body=""
	I1212 20:34:01.458124  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:01.458397  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:01.958241  398903 type.go:168] "Request Body" body=""
	I1212 20:34:01.958318  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:01.958645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:02.458431  398903 type.go:168] "Request Body" body=""
	I1212 20:34:02.458517  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:02.458903  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:02.958515  398903 type.go:168] "Request Body" body=""
	I1212 20:34:02.958593  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:02.958871  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:02.958913  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:03.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:34:03.457665  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:03.458014  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:03.957750  398903 type.go:168] "Request Body" body=""
	I1212 20:34:03.957834  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:03.958178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:04.457755  398903 type.go:168] "Request Body" body=""
	I1212 20:34:04.457832  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:04.458106  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:04.957792  398903 type.go:168] "Request Body" body=""
	I1212 20:34:04.957872  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:04.958222  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:05.457932  398903 type.go:168] "Request Body" body=""
	I1212 20:34:05.458011  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:05.458316  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:05.458363  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:05.958224  398903 type.go:168] "Request Body" body=""
	I1212 20:34:05.958347  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:05.958674  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:06.457554  398903 type.go:168] "Request Body" body=""
	I1212 20:34:06.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:06.457980  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:06.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:06.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:06.958087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:07.457764  398903 type.go:168] "Request Body" body=""
	I1212 20:34:07.457837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:07.458126  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:07.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:34:07.957717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:07.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:07.958131  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:08.457790  398903 type.go:168] "Request Body" body=""
	I1212 20:34:08.457867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:08.458190  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:08.957583  398903 type.go:168] "Request Body" body=""
	I1212 20:34:08.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:08.958018  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:09.457609  398903 type.go:168] "Request Body" body=""
	I1212 20:34:09.457690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:09.457986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:09.957661  398903 type.go:168] "Request Body" body=""
	I1212 20:34:09.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:09.958082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:10.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:34:10.457682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:10.458044  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:10.458120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:10.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:34:10.957716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:10.958069  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:11.457925  398903 type.go:168] "Request Body" body=""
	I1212 20:34:11.458005  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:11.458337  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:11.957904  398903 type.go:168] "Request Body" body=""
	I1212 20:34:11.957987  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:11.958273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:12.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:34:12.457716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:12.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:12.957766  398903 type.go:168] "Request Body" body=""
	I1212 20:34:12.957844  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:12.958153  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:12.958206  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:13.457572  398903 type.go:168] "Request Body" body=""
	I1212 20:34:13.457652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:13.457977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:13.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:34:13.957752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:13.958163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:14.457645  398903 type.go:168] "Request Body" body=""
	I1212 20:34:14.457721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:14.458033  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:14.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:34:14.957669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:14.957980  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:15.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:34:15.457800  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:15.458149  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:15.458206  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:15.957907  398903 type.go:168] "Request Body" body=""
	I1212 20:34:15.958010  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:15.958356  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:16.458302  398903 type.go:168] "Request Body" body=""
	I1212 20:34:16.458374  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:16.458653  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:16.958451  398903 type.go:168] "Request Body" body=""
	I1212 20:34:16.958529  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:16.958870  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:17.457647  398903 type.go:168] "Request Body" body=""
	I1212 20:34:17.457741  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:17.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:17.957571  398903 type.go:168] "Request Body" body=""
	I1212 20:34:17.957648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:17.958005  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:17.958058  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:18.457731  398903 type.go:168] "Request Body" body=""
	I1212 20:34:18.457820  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:18.458202  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:18.957933  398903 type.go:168] "Request Body" body=""
	I1212 20:34:18.958011  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:18.958346  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:19.457582  398903 type.go:168] "Request Body" body=""
	I1212 20:34:19.457658  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:19.457973  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:19.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:34:19.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:19.958037  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:19.958084  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:20.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:34:20.457726  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:20.458052  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:20.957756  398903 type.go:168] "Request Body" body=""
	I1212 20:34:20.957830  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:20.958096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:21.458059  398903 type.go:168] "Request Body" body=""
	I1212 20:34:21.458132  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:21.458454  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:21.958169  398903 type.go:168] "Request Body" body=""
	I1212 20:34:21.958248  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:21.958614  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:21.958670  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:22.458387  398903 type.go:168] "Request Body" body=""
	I1212 20:34:22.458456  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:22.458712  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:22.958495  398903 type.go:168] "Request Body" body=""
	I1212 20:34:22.958574  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:22.958894  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:23.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:34:23.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:23.458042  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:23.957581  398903 type.go:168] "Request Body" body=""
	I1212 20:34:23.957653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:23.957931  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:24.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:34:24.457766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:24.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:24.458117  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:24.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:24.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:24.958072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:25.457596  398903 type.go:168] "Request Body" body=""
	I1212 20:34:25.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:25.458023  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:25.958032  398903 type.go:168] "Request Body" body=""
	I1212 20:34:25.958118  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:25.958454  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:26.458388  398903 type.go:168] "Request Body" body=""
	I1212 20:34:26.458463  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:26.458824  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:26.458879  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:26.958476  398903 type.go:168] "Request Body" body=""
	I1212 20:34:26.958547  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:26.958814  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:27.458579  398903 type.go:168] "Request Body" body=""
	I1212 20:34:27.458656  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:27.458987  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:27.957727  398903 type.go:168] "Request Body" body=""
	I1212 20:34:27.957802  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:27.958162  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:28.458439  398903 type.go:168] "Request Body" body=""
	I1212 20:34:28.458510  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:28.458774  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:28.958512  398903 type.go:168] "Request Body" body=""
	I1212 20:34:28.958589  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:28.958911  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:28.958974  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:29.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:34:29.457686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:29.458020  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:29.957734  398903 type.go:168] "Request Body" body=""
	I1212 20:34:29.957825  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:29.958161  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:30.457641  398903 type.go:168] "Request Body" body=""
	I1212 20:34:30.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:30.458083  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:30.957610  398903 type.go:168] "Request Body" body=""
	I1212 20:34:30.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:30.958024  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:31.457903  398903 type.go:168] "Request Body" body=""
	I1212 20:34:31.458012  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:31.458336  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:31.458388  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:31.958144  398903 type.go:168] "Request Body" body=""
	I1212 20:34:31.958227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:31.958581  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:32.458466  398903 type.go:168] "Request Body" body=""
	I1212 20:34:32.458569  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:32.458930  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:32.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:34:32.957651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:32.957985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:33.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:34:33.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:33.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:33.957814  398903 type.go:168] "Request Body" body=""
	I1212 20:34:33.957889  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:33.958221  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:33.958279  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:34.457576  398903 type.go:168] "Request Body" body=""
	I1212 20:34:34.457651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:34.457968  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:34.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:34:34.957724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:34.958077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:35.457792  398903 type.go:168] "Request Body" body=""
	I1212 20:34:35.457876  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:35.458181  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:35.958034  398903 type.go:168] "Request Body" body=""
	I1212 20:34:35.958104  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:35.958369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:35.958411  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:36.458355  398903 type.go:168] "Request Body" body=""
	I1212 20:34:36.458432  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:36.458815  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:36.957543  398903 type.go:168] "Request Body" body=""
	I1212 20:34:36.957626  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:36.957947  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:37.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:34:37.457678  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:37.457995  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:37.957635  398903 type.go:168] "Request Body" body=""
	I1212 20:34:37.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:37.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:38.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:34:38.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:38.458116  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:38.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:38.957684  398903 type.go:168] "Request Body" body=""
	I1212 20:34:38.957762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:38.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:39.457740  398903 type.go:168] "Request Body" body=""
	I1212 20:34:39.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:39.458189  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:39.957892  398903 type.go:168] "Request Body" body=""
	I1212 20:34:39.957975  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:39.958305  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:40.457581  398903 type.go:168] "Request Body" body=""
	I1212 20:34:40.457659  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:40.457974  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:40.957654  398903 type.go:168] "Request Body" body=""
	I1212 20:34:40.957727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:40.958080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:40.958134  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:41.457945  398903 type.go:168] "Request Body" body=""
	I1212 20:34:41.458029  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:41.458375  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:41.958149  398903 type.go:168] "Request Body" body=""
	I1212 20:34:41.958218  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:41.958489  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:42.458344  398903 type.go:168] "Request Body" body=""
	I1212 20:34:42.458423  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:42.458797  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:42.957548  398903 type.go:168] "Request Body" body=""
	I1212 20:34:42.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:42.958002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:43.457680  398903 type.go:168] "Request Body" body=""
	I1212 20:34:43.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:43.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:43.458139  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:43.957634  398903 type.go:168] "Request Body" body=""
	I1212 20:34:43.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:43.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:44.457784  398903 type.go:168] "Request Body" body=""
	I1212 20:34:44.457863  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:44.458214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:44.957493  398903 type.go:168] "Request Body" body=""
	I1212 20:34:44.957567  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:44.957832  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:45.457549  398903 type.go:168] "Request Body" body=""
	I1212 20:34:45.457634  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:45.457985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:45.957790  398903 type.go:168] "Request Body" body=""
	I1212 20:34:45.957867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:45.958220  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:45.958281  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:46.458047  398903 type.go:168] "Request Body" body=""
	I1212 20:34:46.458139  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:46.458408  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:46.958199  398903 type.go:168] "Request Body" body=""
	I1212 20:34:46.958280  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:46.958672  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:47.458502  398903 type.go:168] "Request Body" body=""
	I1212 20:34:47.458578  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:47.458923  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:47.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:34:47.957667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:47.958000  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:48.457673  398903 type.go:168] "Request Body" body=""
	I1212 20:34:48.457766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:48.458114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:48.458163  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:48.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:34:48.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:48.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:49.457750  398903 type.go:168] "Request Body" body=""
	I1212 20:34:49.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:49.458132  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:49.957625  398903 type.go:168] "Request Body" body=""
	I1212 20:34:49.957700  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:49.958065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:50.457775  398903 type.go:168] "Request Body" body=""
	I1212 20:34:50.457853  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:50.458187  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:50.458247  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:50.957570  398903 type.go:168] "Request Body" body=""
	I1212 20:34:50.957642  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:50.957959  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:51.457904  398903 type.go:168] "Request Body" body=""
	I1212 20:34:51.458001  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:51.458321  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:51.957626  398903 type.go:168] "Request Body" body=""
	I1212 20:34:51.957709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:51.958019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:52.457677  398903 type.go:168] "Request Body" body=""
	I1212 20:34:52.457750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:52.458071  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:52.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:52.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:52.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:52.958126  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:53.457793  398903 type.go:168] "Request Body" body=""
	I1212 20:34:53.457868  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:53.458211  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:53.957606  398903 type.go:168] "Request Body" body=""
	I1212 20:34:53.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:53.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:54.457738  398903 type.go:168] "Request Body" body=""
	I1212 20:34:54.457816  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:54.458178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:54.957898  398903 type.go:168] "Request Body" body=""
	I1212 20:34:54.957979  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:54.958335  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:54.958392  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:55.457874  398903 type.go:168] "Request Body" body=""
	I1212 20:34:55.457957  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:55.461901  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:34:55.957753  398903 type.go:168] "Request Body" body=""
	I1212 20:34:55.957835  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:55.958180  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:56.458205  398903 type.go:168] "Request Body" body=""
	I1212 20:34:56.458289  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:56.458646  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:56.958289  398903 type.go:168] "Request Body" body=""
	I1212 20:34:56.958348  398903 node_ready.go:38] duration metric: took 6m0.000942014s for node "functional-261311" to be "Ready" ...
	I1212 20:34:56.961249  398903 out.go:203] 
	W1212 20:34:56.963984  398903 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 20:34:56.964005  398903 out.go:285] * 
	W1212 20:34:56.966156  398903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:34:56.969023  398903 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.645916855Z" level=info msg="Using the internal default seccomp profile"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.645924379Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.645930903Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.645936753Z" level=info msg="RDT not available in the host system"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.645950013Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.646683583Z" level=info msg="Conmon does support the --sync option"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.646710381Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.64672831Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.647590316Z" level=info msg="Conmon does support the --sync option"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.647612594Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.647752583Z" level=info msg="Updated default CNI network name to "
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.648322057Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.648859975Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.648918872Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697369859Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697535129Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697630006Z" level=info msg="Create NRI interface"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697796219Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697818832Z" level=info msg="runtime interface created"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697832838Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697839345Z" level=info msg="runtime interface starting up..."
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697845639Z" level=info msg="starting plugins..."
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697862041Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697933098Z" level=info msg="No systemd watchdog enabled"
	Dec 12 20:28:54 functional-261311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:34:58.991274    8589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:34:58.992036    8589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:34:58.993624    8589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:34:58.993942    8589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:34:58.995446    8589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:34:59 up  3:17,  0 user,  load average: 0.37, 0.31, 0.91
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:34:56 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:34:56 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1136.
	Dec 12 20:34:56 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:34:56 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:34:57 functional-261311 kubelet[8479]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:34:57 functional-261311 kubelet[8479]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:34:57 functional-261311 kubelet[8479]: E1212 20:34:57.028730    8479 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:34:57 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:34:57 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:34:57 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1137.
	Dec 12 20:34:57 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:34:57 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:34:57 functional-261311 kubelet[8484]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:34:57 functional-261311 kubelet[8484]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:34:57 functional-261311 kubelet[8484]: E1212 20:34:57.772491    8484 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:34:57 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:34:57 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:34:58 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1138.
	Dec 12 20:34:58 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:34:58 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:34:58 functional-261311 kubelet[8505]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:34:58 functional-261311 kubelet[8505]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:34:58 functional-261311 kubelet[8505]: E1212 20:34:58.523263    8505 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:34:58 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:34:58 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (609.685969ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (369.24s)
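The root cause is visible in the kubelet section of the log above: every restart exits with "kubelet is configured to not run on a host using cgroup v1", so the API server on 192.168.49.2:8441 never comes up, every node-readiness poll is refused, and the 6m0s wait expires with GUEST_START. A minimal, hypothetical host-side check (not taken from this run) for confirming which cgroup hierarchy a similar host exposes before starting a v1.35.0-beta.0 cluster:

    # Hypothetical check, not part of the test run: report the filesystem type
    # mounted at /sys/fs/cgroup to see whether the host is on cgroup v1 or v2.
    stat -fc %T /sys/fs/cgroup/
    # "cgroup2fs" -> unified cgroup v2 (accepted by this kubelet build)
    # "tmpfs"     -> legacy cgroup v1 (rejected, matching the kubelet error above)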

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-261311 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-261311 get po -A: exit status 1 (67.766125ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-261311 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?\n"*: args "kubectl --context functional-261311 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-261311 get po -A"
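The connection refused from 192.168.49.2:8441 is the same symptom seen in the SoftStart failure above: the API server never started because kubelet keeps exiting on the cgroup v1 check, so this is not a kubeconfig or context problem. A hypothetical manual probe (not part of this run) that would confirm the server side is down rather than kubectl pointing at the wrong host or port:

    # Hypothetical probe: ask the apiserver endpoint directly, bypassing kubectl.
    # "connection refused" here matches the kubelet crash loop shown earlier.
    curl -k https://192.168.49.2:8441/healthz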
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (356.162945ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-261311 logs -n 25: (1.069615017s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-205528 image rm kicbase/echo-server:functional-205528 --alsologtostderr                                                                │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                               │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image save --daemon kicbase/echo-server:functional-205528 --alsologtostderr                                                     │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/364853.pem                                                                                          │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /usr/share/ca-certificates/364853.pem                                                                              │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/test/nested/copy/364853/hosts                                                                                 │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                          │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/3648532.pem                                                                                         │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /usr/share/ca-certificates/3648532.pem                                                                             │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                          │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format short --alsologtostderr                                                                                       │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format yaml --alsologtostderr                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ ssh            │ functional-205528 ssh pgrep buildkitd                                                                                                             │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ image          │ functional-205528 image ls --format json --alsologtostderr                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr                                            │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format table --alsologtostderr                                                                                       │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ delete         │ -p functional-205528                                                                                                                              │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ start          │ -p functional-261311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ start          │ -p functional-261311 --alsologtostderr -v=8                                                                                                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:28 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:28:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:28:51.200639  398903 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:28:51.200813  398903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:28:51.200825  398903 out.go:374] Setting ErrFile to fd 2...
	I1212 20:28:51.200844  398903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:28:51.201121  398903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:28:51.201526  398903 out.go:368] Setting JSON to false
	I1212 20:28:51.202423  398903 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11484,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:28:51.202499  398903 start.go:143] virtualization:  
	I1212 20:28:51.205894  398903 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:28:51.209621  398903 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:28:51.209743  398903 notify.go:221] Checking for updates...
	I1212 20:28:51.215382  398903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:28:51.218267  398903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:51.221168  398903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:28:51.224043  398903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:28:51.227018  398903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:28:51.230467  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:51.230581  398903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:28:51.269738  398903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:28:51.269857  398903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:28:51.341809  398903 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:28:51.330621143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:28:51.341929  398903 docker.go:319] overlay module found
	I1212 20:28:51.347026  398903 out.go:179] * Using the docker driver based on existing profile
	I1212 20:28:51.349898  398903 start.go:309] selected driver: docker
	I1212 20:28:51.349928  398903 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:51.350015  398903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:28:51.350136  398903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:28:51.408041  398903 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:28:51.398420734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:28:51.408534  398903 cni.go:84] Creating CNI manager for ""
	I1212 20:28:51.408600  398903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:28:51.408656  398903 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:51.413511  398903 out.go:179] * Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	I1212 20:28:51.416491  398903 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:28:51.419403  398903 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:28:51.422306  398903 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:28:51.422357  398903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:28:51.422368  398903 cache.go:65] Caching tarball of preloaded images
	I1212 20:28:51.422458  398903 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:28:51.422471  398903 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:28:51.422591  398903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:28:51.422818  398903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:28:51.441630  398903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:28:51.441653  398903 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:28:51.441676  398903 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:28:51.441708  398903 start.go:360] acquireMachinesLock for functional-261311: {Name:mkbc4e6c743e47953e99b8ce65e244d33b483105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:28:51.441778  398903 start.go:364] duration metric: took 45.9µs to acquireMachinesLock for "functional-261311"
	I1212 20:28:51.441803  398903 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:28:51.441812  398903 fix.go:54] fixHost starting: 
	I1212 20:28:51.442073  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:51.469956  398903 fix.go:112] recreateIfNeeded on functional-261311: state=Running err=<nil>
	W1212 20:28:51.469989  398903 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:28:51.473238  398903 out.go:252] * Updating the running docker "functional-261311" container ...
	I1212 20:28:51.473304  398903 machine.go:94] provisionDockerMachine start ...
	I1212 20:28:51.473396  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.494630  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.494961  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.494976  398903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:28:51.648147  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:28:51.648174  398903 ubuntu.go:182] provisioning hostname "functional-261311"
	I1212 20:28:51.648237  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.668778  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.669090  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.669106  398903 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-261311 && echo "functional-261311" | sudo tee /etc/hostname
	I1212 20:28:51.829776  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:28:51.829853  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.848648  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.848971  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.848987  398903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-261311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-261311/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-261311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:28:52.002627  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:28:52.002659  398903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:28:52.002689  398903 ubuntu.go:190] setting up certificates
	I1212 20:28:52.002713  398903 provision.go:84] configureAuth start
	I1212 20:28:52.002795  398903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:28:52.023958  398903 provision.go:143] copyHostCerts
	I1212 20:28:52.024006  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:28:52.024050  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:28:52.024064  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:28:52.024145  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:28:52.024243  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:28:52.024271  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:28:52.024280  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:28:52.024310  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:28:52.024357  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:28:52.024421  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:28:52.024431  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:28:52.024463  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:28:52.024521  398903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.functional-261311 san=[127.0.0.1 192.168.49.2 functional-261311 localhost minikube]
	I1212 20:28:52.567706  398903 provision.go:177] copyRemoteCerts
	I1212 20:28:52.567776  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:28:52.567821  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:52.585858  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:52.692768  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:28:52.692828  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:28:52.711466  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:28:52.711534  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:28:52.730742  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:28:52.730815  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:28:52.749109  398903 provision.go:87] duration metric: took 746.363484ms to configureAuth
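	
	The configureAuth step above regenerates the machine server certificate and copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A quick manual check that the certificates landed, as a sketch assuming the profile is still up and using minikube's ssh passthrough:
	
	    minikube ssh -p functional-261311 -- sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem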
	I1212 20:28:52.749138  398903 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:28:52.749373  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:52.749480  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:52.767233  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:52.767548  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:52.767570  398903 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:28:53.124031  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:28:53.124063  398903 machine.go:97] duration metric: took 1.650735569s to provisionDockerMachine
	I1212 20:28:53.124076  398903 start.go:293] postStartSetup for "functional-261311" (driver="docker")
	I1212 20:28:53.124090  398903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:28:53.124184  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:28:53.124249  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.144150  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.248393  398903 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:28:53.251578  398903 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 20:28:53.251600  398903 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 20:28:53.251605  398903 command_runner.go:130] > VERSION_ID="12"
	I1212 20:28:53.251610  398903 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 20:28:53.251614  398903 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 20:28:53.251618  398903 command_runner.go:130] > ID=debian
	I1212 20:28:53.251623  398903 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 20:28:53.251629  398903 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 20:28:53.251634  398903 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 20:28:53.251713  398903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:28:53.251736  398903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:28:53.251748  398903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:28:53.251809  398903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:28:53.251889  398903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:28:53.251900  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:28:53.251976  398903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> hosts in /etc/test/nested/copy/364853
	I1212 20:28:53.251984  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> /etc/test/nested/copy/364853/hosts
	I1212 20:28:53.252026  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364853
	I1212 20:28:53.259320  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:28:53.277130  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts --> /etc/test/nested/copy/364853/hosts (40 bytes)
	I1212 20:28:53.294238  398903 start.go:296] duration metric: took 170.145848ms for postStartSetup
	I1212 20:28:53.294390  398903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:28:53.294470  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.312603  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.412930  398903 command_runner.go:130] > 11%
	I1212 20:28:53.413464  398903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:28:53.417828  398903 command_runner.go:130] > 174G
	I1212 20:28:53.418334  398903 fix.go:56] duration metric: took 1.976518079s for fixHost
	I1212 20:28:53.418383  398903 start.go:83] releasing machines lock for "functional-261311", held for 1.976583573s
	I1212 20:28:53.418465  398903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:28:53.435134  398903 ssh_runner.go:195] Run: cat /version.json
	I1212 20:28:53.435190  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.435445  398903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:28:53.435511  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.452987  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.462005  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.555880  398903 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 20:28:53.556060  398903 ssh_runner.go:195] Run: systemctl --version
	I1212 20:28:53.643428  398903 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 20:28:53.646219  398903 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 20:28:53.646272  398903 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 20:28:53.646362  398903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:28:53.685489  398903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 20:28:53.690919  398903 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 20:28:53.690960  398903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:28:53.691016  398903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:28:53.699790  398903 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:28:53.699851  398903 start.go:496] detecting cgroup driver to use...
	I1212 20:28:53.699883  398903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:28:53.699937  398903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:28:53.716256  398903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:28:53.731380  398903 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:28:53.731442  398903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:28:53.747947  398903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:28:53.763704  398903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:28:53.877723  398903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:28:53.997385  398903 docker.go:234] disabling docker service ...
	I1212 20:28:53.997457  398903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:28:54.016313  398903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:28:54.032112  398903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:28:54.157667  398903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:28:54.273189  398903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:28:54.288211  398903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:28:54.301284  398903 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 20:28:54.302509  398903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:28:54.302613  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.311343  398903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:28:54.311460  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.320776  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.330058  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.340191  398903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:28:54.348326  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.357164  398903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.365464  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.374528  398903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:28:54.381778  398903 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 20:28:54.382795  398903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:28:54.390360  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:54.529224  398903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:28:54.703666  398903 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:28:54.703740  398903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:28:54.707780  398903 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 20:28:54.707808  398903 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 20:28:54.707826  398903 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1212 20:28:54.707834  398903 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:28:54.707840  398903 command_runner.go:130] > Access: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707850  398903 command_runner.go:130] > Modify: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707858  398903 command_runner.go:130] > Change: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707861  398903 command_runner.go:130] >  Birth: -
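	
	The preceding steps rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) and then restart CRI-O before polling its socket. Consolidated into a shell sketch run on the node, using the same substitutions the log shows:
	
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo systemctl daemon-reload && sudo systemctl restart crio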
	I1212 20:28:54.707934  398903 start.go:564] Will wait 60s for crictl version
	I1212 20:28:54.708017  398903 ssh_runner.go:195] Run: which crictl
	I1212 20:28:54.711729  398903 command_runner.go:130] > /usr/local/bin/crictl
	I1212 20:28:54.711909  398903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:28:54.737852  398903 command_runner.go:130] > Version:  0.1.0
	I1212 20:28:54.737888  398903 command_runner.go:130] > RuntimeName:  cri-o
	I1212 20:28:54.737895  398903 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1212 20:28:54.737901  398903 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 20:28:54.740042  398903 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:28:54.740184  398903 ssh_runner.go:195] Run: crio --version
	I1212 20:28:54.769676  398903 command_runner.go:130] > crio version 1.34.3
	I1212 20:28:54.769713  398903 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 20:28:54.769720  398903 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 20:28:54.769725  398903 command_runner.go:130] >    GitTreeState:   dirty
	I1212 20:28:54.769750  398903 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 20:28:54.769764  398903 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 20:28:54.769768  398903 command_runner.go:130] >    Compiler:       gc
	I1212 20:28:54.769788  398903 command_runner.go:130] >    Platform:       linux/arm64
	I1212 20:28:54.769802  398903 command_runner.go:130] >    Linkmode:       static
	I1212 20:28:54.769806  398903 command_runner.go:130] >    BuildTags:
	I1212 20:28:54.769810  398903 command_runner.go:130] >      static
	I1212 20:28:54.769813  398903 command_runner.go:130] >      netgo
	I1212 20:28:54.769832  398903 command_runner.go:130] >      osusergo
	I1212 20:28:54.769838  398903 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 20:28:54.769842  398903 command_runner.go:130] >      seccomp
	I1212 20:28:54.769849  398903 command_runner.go:130] >      apparmor
	I1212 20:28:54.769852  398903 command_runner.go:130] >      selinux
	I1212 20:28:54.769859  398903 command_runner.go:130] >    LDFlags:          unknown
	I1212 20:28:54.769867  398903 command_runner.go:130] >    SeccompEnabled:   true
	I1212 20:28:54.769872  398903 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 20:28:54.769969  398903 ssh_runner.go:195] Run: crio --version
	I1212 20:28:54.796781  398903 command_runner.go:130] > crio version 1.34.3
	I1212 20:28:54.796850  398903 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 20:28:54.796873  398903 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 20:28:54.796896  398903 command_runner.go:130] >    GitTreeState:   dirty
	I1212 20:28:54.796933  398903 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 20:28:54.796961  398903 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 20:28:54.796982  398903 command_runner.go:130] >    Compiler:       gc
	I1212 20:28:54.797005  398903 command_runner.go:130] >    Platform:       linux/arm64
	I1212 20:28:54.797036  398903 command_runner.go:130] >    Linkmode:       static
	I1212 20:28:54.797055  398903 command_runner.go:130] >    BuildTags:
	I1212 20:28:54.797071  398903 command_runner.go:130] >      static
	I1212 20:28:54.797089  398903 command_runner.go:130] >      netgo
	I1212 20:28:54.797108  398903 command_runner.go:130] >      osusergo
	I1212 20:28:54.797151  398903 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 20:28:54.797177  398903 command_runner.go:130] >      seccomp
	I1212 20:28:54.797197  398903 command_runner.go:130] >      apparmor
	I1212 20:28:54.797231  398903 command_runner.go:130] >      selinux
	I1212 20:28:54.797262  398903 command_runner.go:130] >    LDFlags:          unknown
	I1212 20:28:54.797290  398903 command_runner.go:130] >    SeccompEnabled:   true
	I1212 20:28:54.797309  398903 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 20:28:54.804038  398903 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
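	
	The "CRI-O 1.34.3" figure in this message comes from the crictl and crio version probes directly above. A manual cross-check on the node, as a sketch using the same binaries the log invokes:
	
	    sudo crictl version    # RuntimeName cri-o, RuntimeVersion 1.34.3, RuntimeApiVersion v1
	    crio --version         # full build info: GitCommit, GoVersion, BuildTags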
	I1212 20:28:54.806949  398903 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:28:54.823441  398903 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:28:54.827623  398903 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 20:28:54.827865  398903 kubeadm.go:884] updating cluster {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:28:54.827977  398903 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:28:54.828031  398903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:28:54.860175  398903 command_runner.go:130] > {
	I1212 20:28:54.860197  398903 command_runner.go:130] >   "images":  [
	I1212 20:28:54.860201  398903 command_runner.go:130] >     {
	I1212 20:28:54.860214  398903 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 20:28:54.860219  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860225  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 20:28:54.860229  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860233  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860242  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 20:28:54.860250  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 20:28:54.860254  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860258  398903 command_runner.go:130] >       "size":  "111333938",
	I1212 20:28:54.860263  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860270  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860274  398903 command_runner.go:130] >     },
	I1212 20:28:54.860277  398903 command_runner.go:130] >     {
	I1212 20:28:54.860285  398903 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 20:28:54.860289  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860295  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:28:54.860298  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860302  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860310  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 20:28:54.860333  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 20:28:54.860341  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860346  398903 command_runner.go:130] >       "size":  "29037500",
	I1212 20:28:54.860350  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860357  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860360  398903 command_runner.go:130] >     },
	I1212 20:28:54.860363  398903 command_runner.go:130] >     {
	I1212 20:28:54.860391  398903 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 20:28:54.860396  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860401  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 20:28:54.860404  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860408  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860417  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 20:28:54.860425  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 20:28:54.860428  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860434  398903 command_runner.go:130] >       "size":  "74491780",
	I1212 20:28:54.860439  398903 command_runner.go:130] >       "username":  "nonroot",
	I1212 20:28:54.860443  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860447  398903 command_runner.go:130] >     },
	I1212 20:28:54.860456  398903 command_runner.go:130] >     {
	I1212 20:28:54.860463  398903 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 20:28:54.860467  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860472  398903 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 20:28:54.860478  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860482  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860490  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 20:28:54.860497  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 20:28:54.860505  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860510  398903 command_runner.go:130] >       "size":  "60857170",
	I1212 20:28:54.860513  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860517  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860521  398903 command_runner.go:130] >       },
	I1212 20:28:54.860530  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860534  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860540  398903 command_runner.go:130] >     },
	I1212 20:28:54.860546  398903 command_runner.go:130] >     {
	I1212 20:28:54.860552  398903 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 20:28:54.860558  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860564  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 20:28:54.860567  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860577  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860594  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 20:28:54.860603  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 20:28:54.860610  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860614  398903 command_runner.go:130] >       "size":  "84949999",
	I1212 20:28:54.860618  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860622  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860625  398903 command_runner.go:130] >       },
	I1212 20:28:54.860630  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860636  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860639  398903 command_runner.go:130] >     },
	I1212 20:28:54.860643  398903 command_runner.go:130] >     {
	I1212 20:28:54.860652  398903 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 20:28:54.860659  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860665  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 20:28:54.860668  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860672  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860684  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 20:28:54.860695  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 20:28:54.860698  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860702  398903 command_runner.go:130] >       "size":  "72170325",
	I1212 20:28:54.860706  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860711  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860717  398903 command_runner.go:130] >       },
	I1212 20:28:54.860721  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860726  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860739  398903 command_runner.go:130] >     },
	I1212 20:28:54.860747  398903 command_runner.go:130] >     {
	I1212 20:28:54.860754  398903 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 20:28:54.860760  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860766  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 20:28:54.860769  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860773  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860781  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 20:28:54.860792  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 20:28:54.860796  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860801  398903 command_runner.go:130] >       "size":  "74106775",
	I1212 20:28:54.860807  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860811  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860817  398903 command_runner.go:130] >     },
	I1212 20:28:54.860820  398903 command_runner.go:130] >     {
	I1212 20:28:54.860827  398903 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 20:28:54.860831  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860839  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 20:28:54.860844  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860854  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860863  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 20:28:54.860876  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 20:28:54.860883  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860887  398903 command_runner.go:130] >       "size":  "49822549",
	I1212 20:28:54.860891  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860895  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860905  398903 command_runner.go:130] >       },
	I1212 20:28:54.860908  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860912  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860922  398903 command_runner.go:130] >     },
	I1212 20:28:54.860925  398903 command_runner.go:130] >     {
	I1212 20:28:54.860932  398903 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 20:28:54.860938  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860944  398903 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.860948  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860953  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860961  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 20:28:54.860971  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 20:28:54.860975  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860979  398903 command_runner.go:130] >       "size":  "519884",
	I1212 20:28:54.860984  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860991  398903 command_runner.go:130] >         "value":  "65535"
	I1212 20:28:54.860994  398903 command_runner.go:130] >       },
	I1212 20:28:54.861000  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.861004  398903 command_runner.go:130] >       "pinned":  true
	I1212 20:28:54.861014  398903 command_runner.go:130] >     }
	I1212 20:28:54.861017  398903 command_runner.go:130] >   ]
	I1212 20:28:54.861020  398903 command_runner.go:130] > }
	I1212 20:28:54.861204  398903 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:28:54.861218  398903 crio.go:433] Images already preloaded, skipping extraction
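	
	The decision to skip preload extraction is based on the crictl image listing above. A sketch for inspecting the same data by hand on the node (jq is an assumption here; the node image is not guaranteed to ship it):
	
	    sudo crictl images                                                 # table view of the cached images
	    sudo crictl images --output json | jq -r '.images[].repoTags[]'   # just the tags, if jq is present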
	I1212 20:28:54.861275  398903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:28:54.883482  398903 command_runner.go:130] > {
	I1212 20:28:54.883501  398903 command_runner.go:130] >   "images":  [
	I1212 20:28:54.883506  398903 command_runner.go:130] >     {
	I1212 20:28:54.883514  398903 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 20:28:54.883520  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883526  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 20:28:54.883529  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883533  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883547  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 20:28:54.883556  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 20:28:54.883560  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883564  398903 command_runner.go:130] >       "size":  "111333938",
	I1212 20:28:54.883568  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883574  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883577  398903 command_runner.go:130] >     },
	I1212 20:28:54.883580  398903 command_runner.go:130] >     {
	I1212 20:28:54.883587  398903 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 20:28:54.883591  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883597  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:28:54.883600  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883604  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883612  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 20:28:54.883620  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 20:28:54.883624  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883628  398903 command_runner.go:130] >       "size":  "29037500",
	I1212 20:28:54.883632  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883638  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883641  398903 command_runner.go:130] >     },
	I1212 20:28:54.883645  398903 command_runner.go:130] >     {
	I1212 20:28:54.883652  398903 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 20:28:54.883656  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883663  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 20:28:54.883666  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883670  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883679  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 20:28:54.883687  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 20:28:54.883690  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883695  398903 command_runner.go:130] >       "size":  "74491780",
	I1212 20:28:54.883699  398903 command_runner.go:130] >       "username":  "nonroot",
	I1212 20:28:54.883702  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883706  398903 command_runner.go:130] >     },
	I1212 20:28:54.883712  398903 command_runner.go:130] >     {
	I1212 20:28:54.883719  398903 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 20:28:54.883723  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883728  398903 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 20:28:54.883733  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883737  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883745  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 20:28:54.883752  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 20:28:54.883756  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883759  398903 command_runner.go:130] >       "size":  "60857170",
	I1212 20:28:54.883763  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883767  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883770  398903 command_runner.go:130] >       },
	I1212 20:28:54.883778  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883783  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883786  398903 command_runner.go:130] >     },
	I1212 20:28:54.883788  398903 command_runner.go:130] >     {
	I1212 20:28:54.883795  398903 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 20:28:54.883798  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883804  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 20:28:54.883807  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883811  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883819  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 20:28:54.883827  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 20:28:54.883830  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883834  398903 command_runner.go:130] >       "size":  "84949999",
	I1212 20:28:54.883838  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883842  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883845  398903 command_runner.go:130] >       },
	I1212 20:28:54.883854  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883858  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883861  398903 command_runner.go:130] >     },
	I1212 20:28:54.883864  398903 command_runner.go:130] >     {
	I1212 20:28:54.883874  398903 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 20:28:54.883878  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883884  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 20:28:54.883888  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883891  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883899  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 20:28:54.883908  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 20:28:54.883911  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883915  398903 command_runner.go:130] >       "size":  "72170325",
	I1212 20:28:54.883919  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883923  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883926  398903 command_runner.go:130] >       },
	I1212 20:28:54.883930  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883935  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883938  398903 command_runner.go:130] >     },
	I1212 20:28:54.883942  398903 command_runner.go:130] >     {
	I1212 20:28:54.883949  398903 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 20:28:54.883952  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883958  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 20:28:54.883961  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883965  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883973  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 20:28:54.883981  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 20:28:54.883983  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883988  398903 command_runner.go:130] >       "size":  "74106775",
	I1212 20:28:54.883991  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883995  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883999  398903 command_runner.go:130] >     },
	I1212 20:28:54.884002  398903 command_runner.go:130] >     {
	I1212 20:28:54.884008  398903 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 20:28:54.884012  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.884017  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 20:28:54.884020  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884030  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.884038  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 20:28:54.884055  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 20:28:54.884061  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884064  398903 command_runner.go:130] >       "size":  "49822549",
	I1212 20:28:54.884068  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.884072  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.884075  398903 command_runner.go:130] >       },
	I1212 20:28:54.884079  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.884082  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.884085  398903 command_runner.go:130] >     },
	I1212 20:28:54.884088  398903 command_runner.go:130] >     {
	I1212 20:28:54.884095  398903 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 20:28:54.884099  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.884103  398903 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.884106  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884110  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.884118  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 20:28:54.884125  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 20:28:54.884129  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884133  398903 command_runner.go:130] >       "size":  "519884",
	I1212 20:28:54.884137  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.884141  398903 command_runner.go:130] >         "value":  "65535"
	I1212 20:28:54.884145  398903 command_runner.go:130] >       },
	I1212 20:28:54.884149  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.884152  398903 command_runner.go:130] >       "pinned":  true
	I1212 20:28:54.884155  398903 command_runner.go:130] >     }
	I1212 20:28:54.884158  398903 command_runner.go:130] >   ]
	I1212 20:28:54.884161  398903 command_runner.go:130] > }
	I1212 20:28:54.885632  398903 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:28:54.885655  398903 cache_images.go:86] Images are preloaded, skipping loading
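The two "sudo crictl images --output json" runs above are how minikube decides that the preload step can be skipped: it parses the returned image list and checks it against the images required for the requested Kubernetes version. Below is a minimal, hypothetical sketch of that check in Go (not minikube's actual implementation); the struct fields mirror the JSON shape visible in the log, and the required-image list is just an example taken from the tags shown above.

// Minimal sketch, not minikube's actual code: parse the output of
// "crictl images --output json" (shape shown in the log above) and check
// that a set of required tags is present before skipping image extraction.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON in the log: images[].id, repoTags, repoDigests,
// size (a quoted string), username and pinned.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
		Username    string   `json:"username"`
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	// Example required set, taken from tags visible in the log; the real
	// list is derived from the requested Kubernetes version.
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
		"registry.k8s.io/etcd:3.6.5-0",
		"registry.k8s.io/coredns/coredns:v1.13.1",
		"registry.k8s.io/pause:3.10.1",
	}

	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}

	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	for _, want := range required {
		if !have[want] {
			fmt.Println("missing image, extraction needed:", want)
			return
		}
	}
	fmt.Println("all required images are preloaded")
}

If any required tag is missing, minikube falls back to extracting the preloaded tarball instead of skipping it, which is the branch the "Images already preloaded, skipping extraction" message above rules out.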
	I1212 20:28:54.885663  398903 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 20:28:54.885778  398903 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-261311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
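The kubelet [Unit]/[Service]/[Install] snippet logged above is rendered from the cluster config that follows it (node name functional-261311, node IP 192.168.49.2, Kubernetes v1.35.0-beta.0). A minimal sketch of how such a drop-in could be generated with text/template is shown below; the template text and struct are illustrative assumptions, not minikube's actual bootstrapper code.

// Illustrative sketch: render a kubelet systemd drop-in like the one logged
// above from a small config struct using text/template. The field names and
// template body are assumptions for this example.
package main

import (
	"os"
	"text/template"
)

type kubeletConfig struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const unitTmpl = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	cfg := kubeletConfig{
		KubernetesVersion: "v1.35.0-beta.0",
		NodeName:          "functional-261311",
		NodeIP:            "192.168.49.2",
	}
	tmpl := template.Must(template.New("kubelet").Parse(unitTmpl))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}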
	I1212 20:28:54.885868  398903 ssh_runner.go:195] Run: crio config
	I1212 20:28:54.934221  398903 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 20:28:54.934247  398903 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 20:28:54.934255  398903 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 20:28:54.934259  398903 command_runner.go:130] > #
	I1212 20:28:54.934288  398903 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 20:28:54.934303  398903 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 20:28:54.934310  398903 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 20:28:54.934320  398903 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 20:28:54.934324  398903 command_runner.go:130] > # reload'.
	I1212 20:28:54.934331  398903 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 20:28:54.934341  398903 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 20:28:54.934347  398903 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 20:28:54.934369  398903 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 20:28:54.934379  398903 command_runner.go:130] > [crio]
	I1212 20:28:54.934386  398903 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 20:28:54.934403  398903 command_runner.go:130] > # containers images, in this directory.
	I1212 20:28:54.934708  398903 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 20:28:54.934725  398903 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 20:28:54.935118  398903 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1212 20:28:54.935167  398903 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1212 20:28:54.935270  398903 command_runner.go:130] > # imagestore = ""
	I1212 20:28:54.935280  398903 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 20:28:54.935288  398903 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 20:28:54.935534  398903 command_runner.go:130] > # storage_driver = "overlay"
	I1212 20:28:54.935547  398903 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 20:28:54.935554  398903 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 20:28:54.935682  398903 command_runner.go:130] > # storage_option = [
	I1212 20:28:54.935790  398903 command_runner.go:130] > # ]
	I1212 20:28:54.935801  398903 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 20:28:54.935808  398903 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 20:28:54.935977  398903 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 20:28:54.935987  398903 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 20:28:54.936004  398903 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 20:28:54.936009  398903 command_runner.go:130] > # always happen on a node reboot
	I1212 20:28:54.936228  398903 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 20:28:54.936250  398903 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 20:28:54.936257  398903 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 20:28:54.936263  398903 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 20:28:54.936389  398903 command_runner.go:130] > # version_file_persist = ""
	I1212 20:28:54.936402  398903 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 20:28:54.936411  398903 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 20:28:54.937698  398903 command_runner.go:130] > # internal_wipe = true
	I1212 20:28:54.937721  398903 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 20:28:54.937728  398903 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 20:28:54.937860  398903 command_runner.go:130] > # internal_repair = true
	I1212 20:28:54.937871  398903 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 20:28:54.937878  398903 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 20:28:54.937885  398903 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 20:28:54.938097  398903 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
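The [crio] keys documented in the dump above (root, runroot, storage_driver, log_dir, the version and clean-shutdown files) are plain TOML and can be inspected programmatically. A small sketch follows, assuming the rendered config lives at /etc/crio/crio.conf and that github.com/BurntSushi/toml is available; the struct covers only a handful of the options shown.

// Sketch: decode a few of the [crio] storage options documented above from
// /etc/crio/crio.conf. The file path and the BurntSushi/toml dependency are
// assumptions for this example.
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type crioConf struct {
	Crio struct {
		Root          string `toml:"root"`
		Runroot       string `toml:"runroot"`
		StorageDriver string `toml:"storage_driver"`
		LogDir        string `toml:"log_dir"`
		VersionFile   string `toml:"version_file"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConf
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("root=%q runroot=%q driver=%q log_dir=%q\n",
		cfg.Crio.Root, cfg.Crio.Runroot, cfg.Crio.StorageDriver, cfg.Crio.LogDir)
}

Keys left commented out in the dump (such as the rootless root and runroot shown above) simply decode to empty strings here, since they fall back to CRI-O's built-in defaults.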
	I1212 20:28:54.938132  398903 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 20:28:54.938152  398903 command_runner.go:130] > [crio.api]
	I1212 20:28:54.938172  398903 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 20:28:54.938284  398903 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 20:28:54.938314  398903 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 20:28:54.938521  398903 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 20:28:54.938555  398903 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 20:28:54.938577  398903 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 20:28:54.938680  398903 command_runner.go:130] > # stream_port = "0"
	I1212 20:28:54.938717  398903 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 20:28:54.938951  398903 command_runner.go:130] > # stream_enable_tls = false
	I1212 20:28:54.938995  398903 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 20:28:54.939084  398903 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 20:28:54.939113  398903 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 20:28:54.939142  398903 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1212 20:28:54.939249  398903 command_runner.go:130] > # stream_tls_cert = ""
	I1212 20:28:54.939291  398903 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 20:28:54.939312  398903 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1212 20:28:54.939622  398903 command_runner.go:130] > # stream_tls_key = ""
	I1212 20:28:54.939657  398903 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 20:28:54.939704  398903 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 20:28:54.939736  398903 command_runner.go:130] > # automatically pick up the changes.
	I1212 20:28:54.939811  398903 command_runner.go:130] > # stream_tls_ca = ""
	I1212 20:28:54.939858  398903 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 20:28:54.940308  398903 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 20:28:54.940353  398903 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 20:28:54.940776  398903 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
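The [crio.api] section above describes the AF_LOCAL socket the kubelet talks to (default /var/run/crio/crio.sock) and the 80 MiB gRPC send/receive limits (83886080 bytes). A trivial connectivity probe of that socket is sketched below; it only verifies that the socket accepts connections and does not speak the CRI protocol, for which crictl or the CRI gRPC client would be used instead, and it typically needs to run as root.

// Sketch: probe the default CRI-O socket path noted above. This only checks
// that the unix socket accepts connections; it issues no CRI requests.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/var/run/crio/crio.sock", 2*time.Second)
	if err != nil {
		fmt.Println("crio socket not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}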
	I1212 20:28:54.940788  398903 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 20:28:54.940801  398903 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 20:28:54.940806  398903 command_runner.go:130] > [crio.runtime]
	I1212 20:28:54.940824  398903 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 20:28:54.940830  398903 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 20:28:54.940834  398903 command_runner.go:130] > # "nofile=1024:2048"
	I1212 20:28:54.940840  398903 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 20:28:54.940969  398903 command_runner.go:130] > # default_ulimits = [
	I1212 20:28:54.941191  398903 command_runner.go:130] > # ]
	I1212 20:28:54.941204  398903 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 20:28:54.941558  398903 command_runner.go:130] > # no_pivot = false
	I1212 20:28:54.941568  398903 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 20:28:54.941575  398903 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 20:28:54.941945  398903 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 20:28:54.941956  398903 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 20:28:54.941961  398903 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 20:28:54.942013  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:28:54.942279  398903 command_runner.go:130] > # conmon = ""
	I1212 20:28:54.942287  398903 command_runner.go:130] > # Cgroup setting for conmon
	I1212 20:28:54.942295  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 20:28:54.942500  398903 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 20:28:54.942511  398903 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 20:28:54.942545  398903 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 20:28:54.942582  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:28:54.942706  398903 command_runner.go:130] > # conmon_env = [
	I1212 20:28:54.942961  398903 command_runner.go:130] > # ]
	I1212 20:28:54.943022  398903 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 20:28:54.943043  398903 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 20:28:54.943084  398903 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 20:28:54.943203  398903 command_runner.go:130] > # default_env = [
	I1212 20:28:54.943456  398903 command_runner.go:130] > # ]
	I1212 20:28:54.943514  398903 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 20:28:54.943537  398903 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1212 20:28:54.943931  398903 command_runner.go:130] > # selinux = false
	I1212 20:28:54.943943  398903 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 20:28:54.943997  398903 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1212 20:28:54.944007  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944219  398903 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:28:54.944231  398903 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1212 20:28:54.944237  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944517  398903 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1212 20:28:54.944529  398903 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 20:28:54.944536  398903 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 20:28:54.944595  398903 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 20:28:54.944603  398903 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 20:28:54.944609  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944908  398903 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 20:28:54.944919  398903 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 20:28:54.944924  398903 command_runner.go:130] > # the cgroup blockio controller.
	I1212 20:28:54.945253  398903 command_runner.go:130] > # blockio_config_file = ""
	I1212 20:28:54.945265  398903 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 20:28:54.945309  398903 command_runner.go:130] > # blockio parameters.
	I1212 20:28:54.945663  398903 command_runner.go:130] > # blockio_reload = false
	I1212 20:28:54.945676  398903 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 20:28:54.945725  398903 command_runner.go:130] > # irqbalance daemon.
	I1212 20:28:54.946100  398903 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 20:28:54.946111  398903 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1212 20:28:54.946174  398903 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 20:28:54.946186  398903 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 20:28:54.946547  398903 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 20:28:54.946561  398903 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 20:28:54.946567  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.946867  398903 command_runner.go:130] > # rdt_config_file = ""
	I1212 20:28:54.946878  398903 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 20:28:54.947089  398903 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 20:28:54.947100  398903 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 20:28:54.947442  398903 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 20:28:54.947454  398903 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 20:28:54.947513  398903 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 20:28:54.947527  398903 command_runner.go:130] > # will be added.
	I1212 20:28:54.947601  398903 command_runner.go:130] > # default_capabilities = [
	I1212 20:28:54.947867  398903 command_runner.go:130] > # 	"CHOWN",
	I1212 20:28:54.948094  398903 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 20:28:54.948277  398903 command_runner.go:130] > # 	"FSETID",
	I1212 20:28:54.948500  398903 command_runner.go:130] > # 	"FOWNER",
	I1212 20:28:54.948701  398903 command_runner.go:130] > # 	"SETGID",
	I1212 20:28:54.948883  398903 command_runner.go:130] > # 	"SETUID",
	I1212 20:28:54.949109  398903 command_runner.go:130] > # 	"SETPCAP",
	I1212 20:28:54.949307  398903 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 20:28:54.949502  398903 command_runner.go:130] > # 	"KILL",
	I1212 20:28:54.949671  398903 command_runner.go:130] > # ]
	I1212 20:28:54.949741  398903 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 20:28:54.949814  398903 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 20:28:54.950073  398903 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 20:28:54.950143  398903 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 20:28:54.950211  398903 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:28:54.950289  398903 command_runner.go:130] > default_sysctls = [
	I1212 20:28:54.950330  398903 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 20:28:54.950370  398903 command_runner.go:130] > ]
	I1212 20:28:54.950439  398903 command_runner.go:130] > # List of devices on the host that a
	I1212 20:28:54.950465  398903 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 20:28:54.950518  398903 command_runner.go:130] > # allowed_devices = [
	I1212 20:28:54.950672  398903 command_runner.go:130] > # 	"/dev/fuse",
	I1212 20:28:54.950902  398903 command_runner.go:130] > # 	"/dev/net/tun",
	I1212 20:28:54.951150  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951221  398903 command_runner.go:130] > # List of additional devices. specified as
	I1212 20:28:54.951244  398903 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 20:28:54.951280  398903 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 20:28:54.951306  398903 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:28:54.951324  398903 command_runner.go:130] > # additional_devices = [
	I1212 20:28:54.951343  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951424  398903 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 20:28:54.951503  398903 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 20:28:54.951521  398903 command_runner.go:130] > # 	"/etc/cdi",
	I1212 20:28:54.951592  398903 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 20:28:54.951609  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951651  398903 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 20:28:54.951672  398903 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 20:28:54.951689  398903 command_runner.go:130] > # Defaults to false.
	I1212 20:28:54.951751  398903 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 20:28:54.951809  398903 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 20:28:54.951879  398903 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 20:28:54.951906  398903 command_runner.go:130] > # hooks_dir = [
	I1212 20:28:54.951934  398903 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 20:28:54.951952  398903 command_runner.go:130] > # ]
	I1212 20:28:54.952010  398903 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 20:28:54.952049  398903 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 20:28:54.952097  398903 command_runner.go:130] > # its default mounts from the following two files:
	I1212 20:28:54.952138  398903 command_runner.go:130] > #
	I1212 20:28:54.952160  398903 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 20:28:54.952191  398903 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 20:28:54.952262  398903 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 20:28:54.952281  398903 command_runner.go:130] > #
	I1212 20:28:54.952324  398903 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 20:28:54.952346  398903 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 20:28:54.952404  398903 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 20:28:54.952491  398903 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 20:28:54.952529  398903 command_runner.go:130] > #
	I1212 20:28:54.952568  398903 command_runner.go:130] > # default_mounts_file = ""
	I1212 20:28:54.952602  398903 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 20:28:54.952623  398903 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 20:28:54.952643  398903 command_runner.go:130] > # pids_limit = -1
	I1212 20:28:54.952677  398903 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 20:28:54.952708  398903 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 20:28:54.952837  398903 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 20:28:54.952892  398903 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 20:28:54.952911  398903 command_runner.go:130] > # log_size_max = -1
	I1212 20:28:54.952955  398903 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 20:28:54.953009  398903 command_runner.go:130] > # log_to_journald = false
	I1212 20:28:54.953062  398903 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 20:28:54.953088  398903 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 20:28:54.953123  398903 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 20:28:54.953149  398903 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 20:28:54.953170  398903 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 20:28:54.953206  398903 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 20:28:54.953299  398903 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 20:28:54.953339  398903 command_runner.go:130] > # read_only = false
	I1212 20:28:54.953359  398903 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 20:28:54.953395  398903 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 20:28:54.953418  398903 command_runner.go:130] > # live configuration reload.
	I1212 20:28:54.953436  398903 command_runner.go:130] > # log_level = "info"
	I1212 20:28:54.953472  398903 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 20:28:54.953562  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.953601  398903 command_runner.go:130] > # log_filter = ""
	I1212 20:28:54.953622  398903 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 20:28:54.953643  398903 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 20:28:54.953675  398903 command_runner.go:130] > # separated by comma.
	I1212 20:28:54.953712  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.953763  398903 command_runner.go:130] > # uid_mappings = ""
	I1212 20:28:54.953804  398903 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 20:28:54.953825  398903 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 20:28:54.953843  398903 command_runner.go:130] > # separated by comma.
	I1212 20:28:54.953907  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.953931  398903 command_runner.go:130] > # gid_mappings = ""
	I1212 20:28:54.953969  398903 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 20:28:54.954021  398903 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:28:54.954062  398903 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:28:54.954085  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.954103  398903 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 20:28:54.954162  398903 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 20:28:54.954184  398903 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:28:54.954234  398903 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:28:54.954322  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.954363  398903 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 20:28:54.954382  398903 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 20:28:54.954423  398903 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 20:28:54.954443  398903 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 20:28:54.954461  398903 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 20:28:54.954533  398903 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 20:28:54.954586  398903 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 20:28:54.954623  398903 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 20:28:54.954643  398903 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 20:28:54.954683  398903 command_runner.go:130] > # drop_infra_ctr = true
	I1212 20:28:54.954704  398903 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 20:28:54.954737  398903 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 20:28:54.954797  398903 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 20:28:54.954876  398903 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 20:28:54.954917  398903 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 20:28:54.954947  398903 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 20:28:54.954967  398903 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 20:28:54.955001  398903 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 20:28:54.955088  398903 command_runner.go:130] > # shared_cpuset = ""
	I1212 20:28:54.955124  398903 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 20:28:54.955160  398903 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 20:28:54.955179  398903 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 20:28:54.955201  398903 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 20:28:54.955242  398903 command_runner.go:130] > # pinns_path = ""
	I1212 20:28:54.955301  398903 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 20:28:54.955365  398903 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 20:28:54.955383  398903 command_runner.go:130] > # enable_criu_support = true
	I1212 20:28:54.955425  398903 command_runner.go:130] > # Enable/disable the generation of the container,
	I1212 20:28:54.955447  398903 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 20:28:54.955466  398903 command_runner.go:130] > # enable_pod_events = false
	I1212 20:28:54.955506  398903 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 20:28:54.955594  398903 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 20:28:54.955624  398903 command_runner.go:130] > # default_runtime = "crun"
	I1212 20:28:54.955661  398903 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 20:28:54.955697  398903 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 20:28:54.955721  398903 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 20:28:54.955790  398903 command_runner.go:130] > # creation as a file is not desired either.
	I1212 20:28:54.955868  398903 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 20:28:54.955891  398903 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 20:28:54.955927  398903 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 20:28:54.955946  398903 command_runner.go:130] > # ]
	I1212 20:28:54.955966  398903 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 20:28:54.956007  398903 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 20:28:54.956057  398903 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 20:28:54.956117  398903 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 20:28:54.956136  398903 command_runner.go:130] > #
	I1212 20:28:54.956299  398903 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 20:28:54.956391  398903 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 20:28:54.956423  398903 command_runner.go:130] > # runtime_type = "oci"
	I1212 20:28:54.956443  398903 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 20:28:54.956476  398903 command_runner.go:130] > # inherit_default_runtime = false
	I1212 20:28:54.956515  398903 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 20:28:54.956535  398903 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 20:28:54.956555  398903 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 20:28:54.956602  398903 command_runner.go:130] > # monitor_env = []
	I1212 20:28:54.956632  398903 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 20:28:54.956651  398903 command_runner.go:130] > # allowed_annotations = []
	I1212 20:28:54.956673  398903 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 20:28:54.956703  398903 command_runner.go:130] > # no_sync_log = false
	I1212 20:28:54.956730  398903 command_runner.go:130] > # default_annotations = {}
	I1212 20:28:54.956749  398903 command_runner.go:130] > # stream_websockets = false
	I1212 20:28:54.956770  398903 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:28:54.956828  398903 command_runner.go:130] > # Where:
	I1212 20:28:54.956858  398903 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 20:28:54.956879  398903 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 20:28:54.956902  398903 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 20:28:54.956934  398903 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 20:28:54.956956  398903 command_runner.go:130] > #   in $PATH.
	I1212 20:28:54.956979  398903 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 20:28:54.957012  398903 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 20:28:54.957045  398903 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 20:28:54.957066  398903 command_runner.go:130] > #   state.
	I1212 20:28:54.957088  398903 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 20:28:54.957122  398903 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 20:28:54.957146  398903 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1212 20:28:54.957169  398903 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1212 20:28:54.957202  398903 command_runner.go:130] > #   the values from the default runtime on load time.
	I1212 20:28:54.957227  398903 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 20:28:54.957250  398903 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 20:28:54.957281  398903 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 20:28:54.957305  398903 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 20:28:54.957327  398903 command_runner.go:130] > #   The currently recognized values are:
	I1212 20:28:54.957359  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 20:28:54.957385  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 20:28:54.957408  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 20:28:54.957450  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 20:28:54.957471  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 20:28:54.957498  398903 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 20:28:54.957534  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 20:28:54.957557  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 20:28:54.957580  398903 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 20:28:54.957613  398903 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1212 20:28:54.957636  398903 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1212 20:28:54.957657  398903 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1212 20:28:54.957689  398903 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1212 20:28:54.957712  398903 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1212 20:28:54.957733  398903 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1212 20:28:54.957769  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1212 20:28:54.957795  398903 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 20:28:54.957816  398903 command_runner.go:130] > #   deprecated option "conmon".
	I1212 20:28:54.957848  398903 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 20:28:54.957870  398903 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 20:28:54.957893  398903 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 20:28:54.957923  398903 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 20:28:54.957949  398903 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 20:28:54.957971  398903 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 20:28:54.958007  398903 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1212 20:28:54.958030  398903 command_runner.go:130] > #   conmon-rs by using:
	I1212 20:28:54.958053  398903 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1212 20:28:54.958092  398903 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1212 20:28:54.958133  398903 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1212 20:28:54.958204  398903 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 20:28:54.958225  398903 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 20:28:54.958278  398903 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1212 20:28:54.958303  398903 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1212 20:28:54.958340  398903 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1212 20:28:54.958372  398903 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1212 20:28:54.958415  398903 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1212 20:28:54.958449  398903 command_runner.go:130] > #   when a machine crash happens.
	I1212 20:28:54.958472  398903 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1212 20:28:54.958496  398903 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1212 20:28:54.958530  398903 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1212 20:28:54.958560  398903 command_runner.go:130] > #   seccomp profile for the runtime.
	I1212 20:28:54.958583  398903 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1212 20:28:54.958606  398903 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1212 20:28:54.958635  398903 command_runner.go:130] > #
	I1212 20:28:54.958656  398903 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 20:28:54.958676  398903 command_runner.go:130] > #
	I1212 20:28:54.958708  398903 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 20:28:54.958738  398903 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 20:28:54.958756  398903 command_runner.go:130] > #
	I1212 20:28:54.958778  398903 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 20:28:54.958809  398903 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 20:28:54.958834  398903 command_runner.go:130] > #
	I1212 20:28:54.958854  398903 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 20:28:54.958874  398903 command_runner.go:130] > # feature.
	I1212 20:28:54.958903  398903 command_runner.go:130] > #
	I1212 20:28:54.958934  398903 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1212 20:28:54.958955  398903 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 20:28:54.958978  398903 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 20:28:54.959015  398903 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 20:28:54.959041  398903 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 20:28:54.959060  398903 command_runner.go:130] > #
	I1212 20:28:54.959092  398903 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 20:28:54.959116  398903 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 20:28:54.959135  398903 command_runner.go:130] > #
	I1212 20:28:54.959166  398903 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1212 20:28:54.959195  398903 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 20:28:54.959213  398903 command_runner.go:130] > #
	I1212 20:28:54.959234  398903 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 20:28:54.959264  398903 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 20:28:54.959290  398903 command_runner.go:130] > # limitation.
	I1212 20:28:54.959309  398903 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1212 20:28:54.959329  398903 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1212 20:28:54.959363  398903 command_runner.go:130] > runtime_type = ""
	I1212 20:28:54.959390  398903 command_runner.go:130] > runtime_root = "/run/crun"
	I1212 20:28:54.959409  398903 command_runner.go:130] > inherit_default_runtime = false
	I1212 20:28:54.959429  398903 command_runner.go:130] > runtime_config_path = ""
	I1212 20:28:54.959460  398903 command_runner.go:130] > container_min_memory = ""
	I1212 20:28:54.959486  398903 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 20:28:54.959503  398903 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 20:28:54.959521  398903 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:28:54.959541  398903 command_runner.go:130] > allowed_annotations = [
	I1212 20:28:54.959574  398903 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1212 20:28:54.959593  398903 command_runner.go:130] > ]
	I1212 20:28:54.959612  398903 command_runner.go:130] > privileged_without_host_devices = false
	I1212 20:28:54.959644  398903 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 20:28:54.959671  398903 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1212 20:28:54.959688  398903 command_runner.go:130] > runtime_type = ""
	I1212 20:28:54.959705  398903 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 20:28:54.959727  398903 command_runner.go:130] > inherit_default_runtime = false
	I1212 20:28:54.959762  398903 command_runner.go:130] > runtime_config_path = ""
	I1212 20:28:54.959780  398903 command_runner.go:130] > container_min_memory = ""
	I1212 20:28:54.959800  398903 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 20:28:54.959819  398903 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 20:28:54.959855  398903 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:28:54.959872  398903 command_runner.go:130] > privileged_without_host_devices = false
	I1212 20:28:54.959894  398903 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 20:28:54.959924  398903 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 20:28:54.959953  398903 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 20:28:54.959976  398903 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I1212 20:28:54.960002  398903 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1212 20:28:54.960047  398903 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores; it is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1212 20:28:54.960072  398903 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1212 20:28:54.960106  398903 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 20:28:54.960135  398903 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 20:28:54.960156  398903 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 20:28:54.960176  398903 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 20:28:54.960207  398903 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 20:28:54.960236  398903 command_runner.go:130] > # Example:
	I1212 20:28:54.960257  398903 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 20:28:54.960281  398903 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 20:28:54.960315  398903 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 20:28:54.960337  398903 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 20:28:54.960356  398903 command_runner.go:130] > # cpuset = "0-1"
	I1212 20:28:54.960392  398903 command_runner.go:130] > # cpushares = "5"
	I1212 20:28:54.960413  398903 command_runner.go:130] > # cpuquota = "1000"
	I1212 20:28:54.960435  398903 command_runner.go:130] > # cpuperiod = "100000"
	I1212 20:28:54.960473  398903 command_runner.go:130] > # cpulimit = "35"
	I1212 20:28:54.960495  398903 command_runner.go:130] > # Where:
	I1212 20:28:54.960507  398903 command_runner.go:130] > # The workload name is workload-type.
	I1212 20:28:54.960516  398903 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 20:28:54.960522  398903 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 20:28:54.960542  398903 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 20:28:54.960555  398903 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 20:28:54.960563  398903 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1212 20:28:54.960568  398903 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 20:28:54.960575  398903 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 20:28:54.960579  398903 command_runner.go:130] > # Default value is set to true
	I1212 20:28:54.960595  398903 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 20:28:54.960602  398903 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 20:28:54.960613  398903 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 20:28:54.960618  398903 command_runner.go:130] > # Default value is set to 'false'
	I1212 20:28:54.960623  398903 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 20:28:54.960637  398903 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1212 20:28:54.960645  398903 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1212 20:28:54.960649  398903 command_runner.go:130] > # timezone = ""
	I1212 20:28:54.960656  398903 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 20:28:54.960661  398903 command_runner.go:130] > #
	I1212 20:28:54.960668  398903 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 20:28:54.960675  398903 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1212 20:28:54.960682  398903 command_runner.go:130] > [crio.image]
	I1212 20:28:54.960688  398903 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 20:28:54.960693  398903 command_runner.go:130] > # default_transport = "docker://"
	I1212 20:28:54.960702  398903 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 20:28:54.960714  398903 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:28:54.960719  398903 command_runner.go:130] > # global_auth_file = ""
	I1212 20:28:54.960724  398903 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 20:28:54.960730  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.960738  398903 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.960745  398903 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 20:28:54.960758  398903 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:28:54.960764  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.960770  398903 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 20:28:54.960777  398903 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 20:28:54.960783  398903 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I1212 20:28:54.960793  398903 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I1212 20:28:54.960800  398903 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 20:28:54.960804  398903 command_runner.go:130] > # pause_command = "/pause"
	I1212 20:28:54.960810  398903 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 20:28:54.960819  398903 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 20:28:54.960828  398903 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 20:28:54.960837  398903 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 20:28:54.960843  398903 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 20:28:54.960855  398903 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 20:28:54.960859  398903 command_runner.go:130] > # pinned_images = [
	I1212 20:28:54.960863  398903 command_runner.go:130] > # ]
	I1212 20:28:54.960869  398903 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 20:28:54.960879  398903 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 20:28:54.960885  398903 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 20:28:54.960891  398903 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 20:28:54.960902  398903 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 20:28:54.960910  398903 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1212 20:28:54.960916  398903 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 20:28:54.960923  398903 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 20:28:54.960933  398903 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 20:28:54.960939  398903 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1212 20:28:54.960948  398903 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 20:28:54.960953  398903 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1212 20:28:54.960960  398903 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 20:28:54.960969  398903 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 20:28:54.960973  398903 command_runner.go:130] > # changing them here.
	I1212 20:28:54.960979  398903 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1212 20:28:54.960983  398903 command_runner.go:130] > # insecure_registries = [
	I1212 20:28:54.960986  398903 command_runner.go:130] > # ]
	I1212 20:28:54.960995  398903 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 20:28:54.961006  398903 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 20:28:54.961012  398903 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 20:28:54.961020  398903 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 20:28:54.961026  398903 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 20:28:54.961032  398903 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1212 20:28:54.961042  398903 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1212 20:28:54.961046  398903 command_runner.go:130] > # auto_reload_registries = false
	I1212 20:28:54.961054  398903 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1212 20:28:54.961062  398903 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1212 20:28:54.961069  398903 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1212 20:28:54.961077  398903 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1212 20:28:54.961082  398903 command_runner.go:130] > # The mode of short name resolution.
	I1212 20:28:54.961089  398903 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1212 20:28:54.961100  398903 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1212 20:28:54.961105  398903 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1212 20:28:54.961112  398903 command_runner.go:130] > # short_name_mode = "enforcing"
	I1212 20:28:54.961118  398903 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1212 20:28:54.961124  398903 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1212 20:28:54.961132  398903 command_runner.go:130] > # oci_artifact_mount_support = true
	I1212 20:28:54.961138  398903 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 20:28:54.961142  398903 command_runner.go:130] > # CNI plugins.
	I1212 20:28:54.961146  398903 command_runner.go:130] > [crio.network]
	I1212 20:28:54.961152  398903 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 20:28:54.961159  398903 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 20:28:54.961164  398903 command_runner.go:130] > # cni_default_network = ""
	I1212 20:28:54.961171  398903 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 20:28:54.961179  398903 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 20:28:54.961185  398903 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 20:28:54.961189  398903 command_runner.go:130] > # plugin_dirs = [
	I1212 20:28:54.961195  398903 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 20:28:54.961198  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961209  398903 command_runner.go:130] > # List of included pod metrics.
	I1212 20:28:54.961213  398903 command_runner.go:130] > # included_pod_metrics = [
	I1212 20:28:54.961217  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961224  398903 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 20:28:54.961228  398903 command_runner.go:130] > [crio.metrics]
	I1212 20:28:54.961234  398903 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 20:28:54.961243  398903 command_runner.go:130] > # enable_metrics = false
	I1212 20:28:54.961248  398903 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 20:28:54.961253  398903 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 20:28:54.961262  398903 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 20:28:54.961271  398903 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 20:28:54.961280  398903 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 20:28:54.961285  398903 command_runner.go:130] > # metrics_collectors = [
	I1212 20:28:54.961291  398903 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 20:28:54.961296  398903 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 20:28:54.961302  398903 command_runner.go:130] > # 	"containers_oom_total",
	I1212 20:28:54.961306  398903 command_runner.go:130] > # 	"processes_defunct",
	I1212 20:28:54.961311  398903 command_runner.go:130] > # 	"operations_total",
	I1212 20:28:54.961315  398903 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 20:28:54.961320  398903 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 20:28:54.961324  398903 command_runner.go:130] > # 	"operations_errors_total",
	I1212 20:28:54.961328  398903 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 20:28:54.961333  398903 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 20:28:54.961338  398903 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 20:28:54.961342  398903 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 20:28:54.961346  398903 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 20:28:54.961351  398903 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 20:28:54.961358  398903 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 20:28:54.961363  398903 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 20:28:54.961374  398903 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1212 20:28:54.961377  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961383  398903 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1212 20:28:54.961389  398903 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1212 20:28:54.961394  398903 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 20:28:54.961398  398903 command_runner.go:130] > # metrics_port = 9090
	I1212 20:28:54.961404  398903 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 20:28:54.961409  398903 command_runner.go:130] > # metrics_socket = ""
	I1212 20:28:54.961420  398903 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 20:28:54.961429  398903 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 20:28:54.961440  398903 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 20:28:54.961445  398903 command_runner.go:130] > # certificate on any modification event.
	I1212 20:28:54.961452  398903 command_runner.go:130] > # metrics_cert = ""
	I1212 20:28:54.961458  398903 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 20:28:54.961464  398903 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 20:28:54.961470  398903 command_runner.go:130] > # metrics_key = ""
	I1212 20:28:54.961476  398903 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 20:28:54.961480  398903 command_runner.go:130] > [crio.tracing]
	I1212 20:28:54.961487  398903 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 20:28:54.961491  398903 command_runner.go:130] > # enable_tracing = false
	I1212 20:28:54.961499  398903 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 20:28:54.961504  398903 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1212 20:28:54.961513  398903 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 20:28:54.961520  398903 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 20:28:54.961527  398903 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 20:28:54.961530  398903 command_runner.go:130] > [crio.nri]
	I1212 20:28:54.961534  398903 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 20:28:54.961544  398903 command_runner.go:130] > # enable_nri = true
	I1212 20:28:54.961548  398903 command_runner.go:130] > # NRI socket to listen on.
	I1212 20:28:54.961553  398903 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 20:28:54.961559  398903 command_runner.go:130] > # NRI plugin directory to use.
	I1212 20:28:54.961564  398903 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 20:28:54.961569  398903 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 20:28:54.961574  398903 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 20:28:54.961579  398903 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 20:28:54.961660  398903 command_runner.go:130] > # nri_disable_connections = false
	I1212 20:28:54.961672  398903 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 20:28:54.961678  398903 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 20:28:54.961683  398903 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 20:28:54.961689  398903 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 20:28:54.961696  398903 command_runner.go:130] > # NRI default validator configuration.
	I1212 20:28:54.961703  398903 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1212 20:28:54.961717  398903 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1212 20:28:54.961722  398903 command_runner.go:130] > # can be restricted/rejected:
	I1212 20:28:54.961728  398903 command_runner.go:130] > # - OCI hook injection
	I1212 20:28:54.961734  398903 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1212 20:28:54.961740  398903 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1212 20:28:54.961747  398903 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1212 20:28:54.961752  398903 command_runner.go:130] > # - adjustment of linux namespaces
	I1212 20:28:54.961759  398903 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1212 20:28:54.961766  398903 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1212 20:28:54.961775  398903 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1212 20:28:54.961779  398903 command_runner.go:130] > #
	I1212 20:28:54.961783  398903 command_runner.go:130] > # [crio.nri.default_validator]
	I1212 20:28:54.961791  398903 command_runner.go:130] > # nri_enable_default_validator = false
	I1212 20:28:54.961796  398903 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1212 20:28:54.961802  398903 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1212 20:28:54.961810  398903 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1212 20:28:54.961815  398903 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1212 20:28:54.961821  398903 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1212 20:28:54.961828  398903 command_runner.go:130] > # nri_validator_required_plugins = [
	I1212 20:28:54.961831  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961838  398903 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1212 20:28:54.961845  398903 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 20:28:54.961851  398903 command_runner.go:130] > [crio.stats]
	I1212 20:28:54.961860  398903 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 20:28:54.961866  398903 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 20:28:54.961872  398903 command_runner.go:130] > # stats_collection_period = 0
	I1212 20:28:54.961879  398903 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1212 20:28:54.961889  398903 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1212 20:28:54.961894  398903 command_runner.go:130] > # collection_period = 0
	I1212 20:28:54.961945  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912485774Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1212 20:28:54.961961  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912523214Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1212 20:28:54.961978  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912551908Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1212 20:28:54.961989  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912577237Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1212 20:28:54.962000  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912661332Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.962016  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912929282Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1212 20:28:54.962028  398903 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
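	The ordering in the "Updating config" lines above matters: the base /etc/crio/crio.conf is read first, then every drop-in in /etc/crio/crio.conf.d is applied in lexical order, so 10-crio.conf wins over 02-crio.conf for any key both files set. A minimal Go sketch of that ordering, assuming the same /etc/crio layout as the log; it only lists the apply order and does not parse the TOML:

	package main

	import (
		"fmt"
		"path/filepath"
		"sort"
	)

	func main() {
		// Base file first, then drop-ins in lexical order: later files
		// override earlier ones for any key they both define.
		dropIns, err := filepath.Glob("/etc/crio/crio.conf.d/*.conf")
		if err != nil {
			panic(err)
		}
		sort.Strings(dropIns)
		for _, f := range append([]string{"/etc/crio/crio.conf"}, dropIns...) {
			fmt.Println("would apply:", f)
		}
	}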
	I1212 20:28:54.962158  398903 cni.go:84] Creating CNI manager for ""
	I1212 20:28:54.962172  398903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:28:54.962187  398903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:28:54.962211  398903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-261311 NodeName:functional-261311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:28:54.962351  398903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-261311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:28:54.962430  398903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:28:54.969281  398903 command_runner.go:130] > kubeadm
	I1212 20:28:54.969300  398903 command_runner.go:130] > kubectl
	I1212 20:28:54.969304  398903 command_runner.go:130] > kubelet
	I1212 20:28:54.970141  398903 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:28:54.970208  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:28:54.977797  398903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:28:54.990948  398903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:28:55.010887  398903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
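	The rendered kubeadm config is shipped to the node as kubeadm.yaml.new; whether the control plane is actually reconfigured depends on whether it differs from the kubeadm.yaml already present (the `diff -u` run later in this log). A small Go sketch of that comparison, assuming both files fit in memory and using the same paths as the log:

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	func main() {
		// Only reconfigure when the freshly rendered config differs from
		// the copy that is already on the node.
		oldCfg, errOld := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
		newCfg, errNew := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if errOld != nil || errNew != nil {
			fmt.Println("config missing on the node: take the full restart path")
			return
		}
		if bytes.Equal(oldCfg, newCfg) {
			fmt.Println("configs match: cluster does not require reconfiguration")
		} else {
			fmt.Println("configs differ: control plane would be reconfigured")
		}
	}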
	I1212 20:28:55.035195  398903 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:28:55.039688  398903 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
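	The grep above checks whether the control-plane hostname is already pinned to the node IP in /etc/hosts. A stdlib Go sketch of the same check; the IP and hostname are copied from the log, everything else is illustrative:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Scan /etc/hosts for "192.168.49.2 control-plane.minikube.internal".
		f, err := os.Open("/etc/hosts")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		found := false
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) >= 2 && fields[0] == "192.168.49.2" && fields[1] == "control-plane.minikube.internal" {
				found = true
				break
			}
		}
		fmt.Println("host entry present:", found)
	}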
	I1212 20:28:55.039770  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:55.162925  398903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:28:55.180455  398903 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311 for IP: 192.168.49.2
	I1212 20:28:55.180486  398903 certs.go:195] generating shared ca certs ...
	I1212 20:28:55.180503  398903 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.180666  398903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:28:55.180714  398903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:28:55.180726  398903 certs.go:257] generating profile certs ...
	I1212 20:28:55.180830  398903 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key
	I1212 20:28:55.180895  398903 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7
	I1212 20:28:55.180950  398903 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key
	I1212 20:28:55.180963  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:28:55.180976  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:28:55.180993  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:28:55.181015  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:28:55.181034  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:28:55.181047  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:28:55.181062  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:28:55.181077  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:28:55.181130  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:28:55.181167  398903 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:28:55.181180  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:28:55.181208  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:28:55.181238  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:28:55.181263  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:28:55.181322  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:28:55.181358  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.181374  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.181387  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.181918  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:28:55.205330  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:28:55.228282  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:28:55.247851  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:28:55.266269  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:28:55.284183  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:28:55.302120  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:28:55.319891  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:28:55.338073  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:28:55.356708  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:28:55.374821  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:28:55.392459  398903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:28:55.405239  398903 ssh_runner.go:195] Run: openssl version
	I1212 20:28:55.411334  398903 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 20:28:55.411437  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.418985  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:28:55.426485  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430183  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430452  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430510  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.471108  398903 command_runner.go:130] > b5213941
	I1212 20:28:55.471637  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:28:55.479292  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.486905  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:28:55.494608  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498479  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498582  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498669  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.541933  398903 command_runner.go:130] > 51391683
	I1212 20:28:55.542454  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:28:55.550083  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.558343  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:28:55.567964  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571832  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571862  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571932  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.617329  398903 command_runner.go:130] > 3ec20f2e
	I1212 20:28:55.617911  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
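	Each CA file above is hashed with `openssl x509 -hash` and then symlinked into /etc/ssl/certs/<hash>.0 so that OpenSSL-based clients on the node trust it. A short Go sketch that shells out to openssl the same way, assuming openssl is on PATH; the printed line is the symlink the log creates:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// `openssl x509 -hash -noout` prints the subject hash that OpenSSL
		// expects as the symlink name (<hash>.0) under /etc/ssl/certs.
		pem := "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, hash)
	}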
	I1212 20:28:55.625593  398903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:28:55.629390  398903 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:28:55.629419  398903 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 20:28:55.629427  398903 command_runner.go:130] > Device: 259,1	Inode: 1315224     Links: 1
	I1212 20:28:55.629433  398903 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:28:55.629439  398903 command_runner.go:130] > Access: 2025-12-12 20:24:47.845478497 +0000
	I1212 20:28:55.629445  398903 command_runner.go:130] > Modify: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629449  398903 command_runner.go:130] > Change: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629454  398903 command_runner.go:130] >  Birth: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629525  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:28:55.669986  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.670463  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:28:55.711204  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.711650  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:28:55.751880  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.752298  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:28:55.793260  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.793349  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:28:55.836082  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.836162  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:28:55.878637  398903 command_runner.go:130] > Certificate will not expire
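	The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. A pure-Go sketch of the same check using crypto/x509, assuming one PEM certificate per file:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		// Equivalent of `openssl x509 -noout -checkend 86400`: does the
		// certificate expire within the next 24 hours?
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}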
	I1212 20:28:55.879114  398903 kubeadm.go:401] StartCluster: {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:55.879241  398903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:28:55.879321  398903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:28:55.906646  398903 cri.go:89] found id: ""
	I1212 20:28:55.906721  398903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:28:55.913746  398903 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 20:28:55.913771  398903 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 20:28:55.913778  398903 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 20:28:55.914790  398903 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:28:55.914807  398903 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:28:55.914874  398903 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:28:55.922292  398903 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:28:55.922687  398903 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-261311" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.922785  398903 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "functional-261311" cluster setting kubeconfig missing "functional-261311" context setting]
	I1212 20:28:55.923055  398903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
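	The kubeconfig repair above comes down to a simple lookup: the profile's cluster and context entries are missing from the kubeconfig, so the file is rewritten. A sketch of that lookup, assuming the k8s.io/client-go module is available; the path and profile name are taken from the log:

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Does the kubeconfig already contain a cluster and a context
		// named after the profile?
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/22112-362983/kubeconfig")
		if err != nil {
			panic(err)
		}
		const name = "functional-261311"
		_, hasCluster := cfg.Clusters[name]
		_, hasContext := cfg.Contexts[name]
		fmt.Printf("cluster present: %v, context present: %v\n", hasCluster, hasContext)
	}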
	I1212 20:28:55.923461  398903 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.923610  398903 kapi.go:59] client config for functional-261311: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:28:55.924164  398903 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:28:55.924185  398903 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:28:55.924192  398903 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:28:55.924198  398903 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:28:55.924202  398903 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:28:55.924512  398903 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:28:55.924617  398903 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 20:28:55.932459  398903 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 20:28:55.932497  398903 kubeadm.go:602] duration metric: took 17.683266ms to restartPrimaryControlPlane
	I1212 20:28:55.932527  398903 kubeadm.go:403] duration metric: took 53.402973ms to StartCluster
	I1212 20:28:55.932549  398903 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.932634  398903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.933272  398903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.933478  398903 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:28:55.933879  398903 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:28:55.933961  398903 addons.go:70] Setting storage-provisioner=true in profile "functional-261311"
	I1212 20:28:55.933975  398903 addons.go:239] Setting addon storage-provisioner=true in "functional-261311"
	I1212 20:28:55.933999  398903 host.go:66] Checking if "functional-261311" exists ...
	I1212 20:28:55.933941  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:55.934065  398903 addons.go:70] Setting default-storageclass=true in profile "functional-261311"
	I1212 20:28:55.934077  398903 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-261311"
	I1212 20:28:55.934349  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.934437  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.939847  398903 out.go:179] * Verifying Kubernetes components...
	I1212 20:28:55.942718  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:55.970904  398903 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:28:55.971648  398903 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.971825  398903 kapi.go:59] client config for functional-261311: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:28:55.972098  398903 addons.go:239] Setting addon default-storageclass=true in "functional-261311"
	I1212 20:28:55.972128  398903 host.go:66] Checking if "functional-261311" exists ...
	I1212 20:28:55.972592  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.974802  398903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:55.974826  398903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:28:55.974884  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:56.016147  398903 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:56.016169  398903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:28:56.016234  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:56.029989  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:56.052293  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:56.147892  398903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:28:56.182806  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:56.199875  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:56.957368  398903 node_ready.go:35] waiting up to 6m0s for node "functional-261311" to be "Ready" ...
	I1212 20:28:56.957463  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957488  398903 type.go:168] "Request Body" body=""
	I1212 20:28:56.957545  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1212 20:28:56.957546  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957630  398903 retry.go:31] will retry after 313.594755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957713  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:56.957754  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957788  398903 retry.go:31] will retry after 317.565464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:57.272396  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:57.275890  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:57.344322  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.344435  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.344471  398903 retry.go:31] will retry after 221.297028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.351139  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.351181  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.351200  398903 retry.go:31] will retry after 309.802672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
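
The retry.go lines above show minikube re-running the failed kubectl apply after a growing delay while the apiserver on localhost:8441 is still refusing connections (the openapi download that kubectl uses for client-side validation cannot reach it). A minimal sketch of that kind of retry-with-backoff loop, for illustration only: applyWithRetry is a hypothetical helper that shells out to kubectl directly, not minikube's actual ssh_runner/retry implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply --force -f manifest` until it succeeds
// or the attempts run out, sleeping a little longer after each failure, roughly
// mirroring the "will retry after ..." lines in the log above.
func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s failed: %v\n%s", manifest, err, out)
		time.Sleep(delay)
		delay *= 2 // grow the wait between attempts; the delays in the real log come from retry.go's own jittered backoff
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5, 300*time.Millisecond)
	if err != nil {
		fmt.Println(err)
	}
}

The doubling here is only illustrative; the point is that each failed apply is retried after a longer pause until the apiserver comes back.
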
	I1212 20:28:57.458417  398903 type.go:168] "Request Body" body=""
	I1212 20:28:57.458511  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:57.458807  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:57.566100  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:57.625592  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.625687  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.625728  398903 retry.go:31] will retry after 499.665469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.661822  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:57.729487  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.729527  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.729550  398903 retry.go:31] will retry after 503.664724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.958032  398903 type.go:168] "Request Body" body=""
	I1212 20:28:57.958134  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:57.958421  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:58.126013  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:58.197757  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:58.197828  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.197853  398903 retry.go:31] will retry after 1.10540153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.234015  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:58.297441  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:58.297548  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.297576  398903 retry.go:31] will retry after 1.092264057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:28:58.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:58.458062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:58.957619  398903 type.go:168] "Request Body" body=""
	I1212 20:28:58.957696  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:58.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:28:58.958116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
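
The node_ready.go wait above polls GET /api/v1/nodes/functional-261311 roughly every half second for up to 6 minutes, treating connection-refused as a transient error rather than a failure. A minimal client-go sketch of such a readiness wait, under the assumption that client-go and apimachinery are available; waitNodeReady is a hypothetical helper, not the test's actual code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the apiserver until the named node reports the Ready
// condition, treating transient errors (such as connection refused while the
// apiserver restarts) as "not ready yet" rather than fatal.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // transient; keep polling until the deadline
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "functional-261311", 6*time.Minute); err != nil {
		fmt.Println("node not ready:", err)
	}
}

PollUntilContextTimeout swallows the transient Get errors the same way the repeated "will retry" warnings do in the log, only giving up when the timeout elapses.
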
	I1212 20:28:59.303542  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:59.364708  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:59.364773  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.364796  398903 retry.go:31] will retry after 1.503349263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.390910  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:59.449881  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:59.449970  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.450009  398903 retry.go:31] will retry after 1.024940216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.457981  398903 type.go:168] "Request Body" body=""
	I1212 20:28:59.458049  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:59.458335  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:59.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:28:59.957671  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:59.957942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:00.457683  398903 type.go:168] "Request Body" body=""
	I1212 20:29:00.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:00.458074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:00.475497  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:00.543993  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:00.544048  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.544072  398903 retry.go:31] will retry after 2.24833219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.868438  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:00.926476  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:00.930138  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.930173  398903 retry.go:31] will retry after 1.556562441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.958315  398903 type.go:168] "Request Body" body=""
	I1212 20:29:00.958392  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:00.958734  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:00.958787  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:01.458585  398903 type.go:168] "Request Body" body=""
	I1212 20:29:01.458668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:01.458995  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:01.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:29:01.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:01.958122  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:02.457889  398903 type.go:168] "Request Body" body=""
	I1212 20:29:02.457969  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:02.458299  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:02.487755  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:02.545597  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:02.549667  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.549705  398903 retry.go:31] will retry after 1.726891228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.793114  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:02.856403  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:02.860058  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.860101  398903 retry.go:31] will retry after 3.686133541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.958383  398903 type.go:168] "Request Body" body=""
	I1212 20:29:02.958453  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:02.958724  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:03.458506  398903 type.go:168] "Request Body" body=""
	I1212 20:29:03.458589  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:03.458945  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:03.459000  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:03.957692  398903 type.go:168] "Request Body" body=""
	I1212 20:29:03.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:03.958210  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:04.277666  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:04.331675  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:04.335668  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:04.335700  398903 retry.go:31] will retry after 4.014847664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:04.457944  398903 type.go:168] "Request Body" body=""
	I1212 20:29:04.458019  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:04.458285  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:04.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:04.957734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:04.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:05.457751  398903 type.go:168] "Request Body" body=""
	I1212 20:29:05.457828  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:05.458181  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:05.958009  398903 type.go:168] "Request Body" body=""
	I1212 20:29:05.958081  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:05.958416  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:05.958469  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:06.458265  398903 type.go:168] "Request Body" body=""
	I1212 20:29:06.458354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:06.458704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:06.546991  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:06.607592  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:06.607644  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:06.607664  398903 retry.go:31] will retry after 4.884355554s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:06.958122  398903 type.go:168] "Request Body" body=""
	I1212 20:29:06.958195  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:06.958538  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:07.458326  398903 type.go:168] "Request Body" body=""
	I1212 20:29:07.458394  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:07.458746  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:07.958381  398903 type.go:168] "Request Body" body=""
	I1212 20:29:07.958480  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:07.958781  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:07.958832  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:08.351452  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:08.404529  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:08.407970  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:08.408008  398903 retry.go:31] will retry after 4.723006947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:08.458208  398903 type.go:168] "Request Body" body=""
	I1212 20:29:08.458304  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:08.458620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:08.958349  398903 type.go:168] "Request Body" body=""
	I1212 20:29:08.958418  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:08.958733  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:09.458562  398903 type.go:168] "Request Body" body=""
	I1212 20:29:09.458637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:09.458962  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:09.957658  398903 type.go:168] "Request Body" body=""
	I1212 20:29:09.957734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:09.958100  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:10.458537  398903 type.go:168] "Request Body" body=""
	I1212 20:29:10.458602  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:10.458869  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:10.458910  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:10.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:29:10.957729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:10.958048  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:11.457940  398903 type.go:168] "Request Body" body=""
	I1212 20:29:11.458047  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:11.458416  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:11.492814  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:11.557889  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:11.557940  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:11.557960  398903 retry.go:31] will retry after 4.177574733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:11.958412  398903 type.go:168] "Request Body" body=""
	I1212 20:29:11.958494  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:11.958766  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:12.458544  398903 type.go:168] "Request Body" body=""
	I1212 20:29:12.458627  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:12.458916  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:12.458972  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:12.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:29:12.957732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:12.958047  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:13.131713  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:13.192350  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:13.192414  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:13.192433  398903 retry.go:31] will retry after 8.846505763s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:13.457684  398903 type.go:168] "Request Body" body=""
	I1212 20:29:13.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:13.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:13.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:29:13.957726  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:13.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:14.457780  398903 type.go:168] "Request Body" body=""
	I1212 20:29:14.457878  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:14.458172  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:14.957892  398903 type.go:168] "Request Body" body=""
	I1212 20:29:14.957968  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:14.958296  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:14.958356  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:15.457665  398903 type.go:168] "Request Body" body=""
	I1212 20:29:15.457745  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:15.458081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:15.737088  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:15.794323  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:15.794363  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:15.794386  398903 retry.go:31] will retry after 13.823463892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:15.958001  398903 type.go:168] "Request Body" body=""
	I1212 20:29:15.958077  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:15.958395  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:16.458178  398903 type.go:168] "Request Body" body=""
	I1212 20:29:16.458264  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:16.458517  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:16.958289  398903 type.go:168] "Request Body" body=""
	I1212 20:29:16.958364  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:16.958733  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:16.958807  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:17.458384  398903 type.go:168] "Request Body" body=""
	I1212 20:29:17.458485  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:17.458800  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:17.958573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:17.958679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:17.958934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:18.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:29:18.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:18.458009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:18.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:29:18.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:18.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:19.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:29:19.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:19.457978  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:19.458044  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:19.957579  398903 type.go:168] "Request Body" body=""
	I1212 20:29:19.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:19.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:20.457635  398903 type.go:168] "Request Body" body=""
	I1212 20:29:20.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:20.458035  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:20.957568  398903 type.go:168] "Request Body" body=""
	I1212 20:29:20.957646  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:20.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:21.457974  398903 type.go:168] "Request Body" body=""
	I1212 20:29:21.458051  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:21.458401  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:21.458459  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:21.958216  398903 type.go:168] "Request Body" body=""
	I1212 20:29:21.958294  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:21.958620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:22.040027  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:22.098166  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:22.102301  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:22.102333  398903 retry.go:31] will retry after 9.311877294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:22.458542  398903 type.go:168] "Request Body" body=""
	I1212 20:29:22.458608  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:22.458864  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:22.957555  398903 type.go:168] "Request Body" body=""
	I1212 20:29:22.957628  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:22.957965  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:23.457696  398903 type.go:168] "Request Body" body=""
	I1212 20:29:23.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:23.458108  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:23.957780  398903 type.go:168] "Request Body" body=""
	I1212 20:29:23.957869  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:23.958143  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:23.958184  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:24.457666  398903 type.go:168] "Request Body" body=""
	I1212 20:29:24.457740  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:24.458060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:24.957754  398903 type.go:168] "Request Body" body=""
	I1212 20:29:24.957831  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:24.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:25.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:29:25.457678  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:25.457956  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:25.958502  398903 type.go:168] "Request Body" body=""
	I1212 20:29:25.958583  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:25.958919  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:25.958993  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:26.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:29:26.457736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:26.458131  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:26.957783  398903 type.go:168] "Request Body" body=""
	I1212 20:29:26.957860  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:26.958177  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:27.457614  398903 type.go:168] "Request Body" body=""
	I1212 20:29:27.457693  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:27.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:27.957616  398903 type.go:168] "Request Body" body=""
	I1212 20:29:27.957698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:27.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:28.457711  398903 type.go:168] "Request Body" body=""
	I1212 20:29:28.457785  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:28.458119  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:28.458170  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:28.957619  398903 type.go:168] "Request Body" body=""
	I1212 20:29:28.957713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:28.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:29.457661  398903 type.go:168] "Request Body" body=""
	I1212 20:29:29.457736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:29.458113  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:29.618498  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:29.673247  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:29.677091  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:29.677126  398903 retry.go:31] will retry after 12.247484069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:29.958487  398903 type.go:168] "Request Body" body=""
	I1212 20:29:29.958556  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:29.958828  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:30.457589  398903 type.go:168] "Request Body" body=""
	I1212 20:29:30.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:30.458053  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:30.957764  398903 type.go:168] "Request Body" body=""
	I1212 20:29:30.957837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:30.958165  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:30.958221  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:31.415106  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:31.457708  398903 type.go:168] "Request Body" body=""
	I1212 20:29:31.457795  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:31.458059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:31.477657  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:31.481452  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:31.481486  398903 retry.go:31] will retry after 29.999837192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:31.958251  398903 type.go:168] "Request Body" body=""
	I1212 20:29:31.958329  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:31.958678  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:32.458335  398903 type.go:168] "Request Body" body=""
	I1212 20:29:32.458415  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:32.458816  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:32.958367  398903 type.go:168] "Request Body" body=""
	I1212 20:29:32.958440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:32.958702  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:32.958743  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:33.458498  398903 type.go:168] "Request Body" body=""
	I1212 20:29:33.458574  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:33.458942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:33.957518  398903 type.go:168] "Request Body" body=""
	I1212 20:29:33.957595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:33.957939  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:34.457617  398903 type.go:168] "Request Body" body=""
	I1212 20:29:34.457695  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:34.457969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:34.957613  398903 type.go:168] "Request Body" body=""
	I1212 20:29:34.957696  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:34.958009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:35.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:29:35.457690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:35.458075  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:35.458135  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:35.957713  398903 type.go:168] "Request Body" body=""
	I1212 20:29:35.957790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:35.958111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:36.457989  398903 type.go:168] "Request Body" body=""
	I1212 20:29:36.458070  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:36.458457  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:36.958268  398903 type.go:168] "Request Body" body=""
	I1212 20:29:36.958361  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:36.958681  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:37.458419  398903 type.go:168] "Request Body" body=""
	I1212 20:29:37.458489  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:37.458760  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:37.458803  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:37.958548  398903 type.go:168] "Request Body" body=""
	I1212 20:29:37.958632  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:37.958989  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:38.457703  398903 type.go:168] "Request Body" body=""
	I1212 20:29:38.457783  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:38.458130  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:38.957582  398903 type.go:168] "Request Body" body=""
	I1212 20:29:38.957648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:38.957909  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:39.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:29:39.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:39.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:39.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:39.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:39.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:39.958142  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:40.458512  398903 type.go:168] "Request Body" body=""
	I1212 20:29:40.458585  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:40.458875  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:40.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:40.957663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:40.957999  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:41.458005  398903 type.go:168] "Request Body" body=""
	I1212 20:29:41.458079  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:41.458415  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:41.924900  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:41.958510  398903 type.go:168] "Request Body" body=""
	I1212 20:29:41.958584  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:41.958850  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:41.958891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:42.001052  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:42.001094  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:42.001115  398903 retry.go:31] will retry after 30.772279059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:42.457672  398903 type.go:168] "Request Body" body=""
	I1212 20:29:42.457755  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:42.458082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:42.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:29:42.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:42.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:43.458540  398903 type.go:168] "Request Body" body=""
	I1212 20:29:43.458610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:43.458870  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:43.957586  398903 type.go:168] "Request Body" body=""
	I1212 20:29:43.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:43.958032  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:44.457633  398903 type.go:168] "Request Body" body=""
	I1212 20:29:44.457707  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:44.458045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:44.458100  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:44.957734  398903 type.go:168] "Request Body" body=""
	I1212 20:29:44.957834  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:44.958170  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:45.457726  398903 type.go:168] "Request Body" body=""
	I1212 20:29:45.457799  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:45.458152  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:45.957997  398903 type.go:168] "Request Body" body=""
	I1212 20:29:45.958081  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:45.958445  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:46.458286  398903 type.go:168] "Request Body" body=""
	I1212 20:29:46.458355  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:46.458622  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:46.458663  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:46.958455  398903 type.go:168] "Request Body" body=""
	I1212 20:29:46.958553  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:46.958947  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:47.457794  398903 type.go:168] "Request Body" body=""
	I1212 20:29:47.457932  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:47.458463  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:47.958292  398903 type.go:168] "Request Body" body=""
	I1212 20:29:47.958370  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:47.958645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:48.458483  398903 type.go:168] "Request Body" body=""
	I1212 20:29:48.458555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:48.458899  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:48.458971  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:48.957649  398903 type.go:168] "Request Body" body=""
	I1212 20:29:48.957731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:48.958090  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:49.457581  398903 type.go:168] "Request Body" body=""
	I1212 20:29:49.457649  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:49.457920  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:49.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:29:49.957681  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:49.958050  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:50.457756  398903 type.go:168] "Request Body" body=""
	I1212 20:29:50.457838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:50.458163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:50.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:50.957647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:50.957983  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:50.958033  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:51.457978  398903 type.go:168] "Request Body" body=""
	I1212 20:29:51.458054  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:51.458398  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:51.958201  398903 type.go:168] "Request Body" body=""
	I1212 20:29:51.958282  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:51.958598  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:52.458345  398903 type.go:168] "Request Body" body=""
	I1212 20:29:52.458418  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:52.458689  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:52.958457  398903 type.go:168] "Request Body" body=""
	I1212 20:29:52.958540  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:52.958883  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:52.958945  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:53.457615  398903 type.go:168] "Request Body" body=""
	I1212 20:29:53.457698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:53.458072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:53.957603  398903 type.go:168] "Request Body" body=""
	I1212 20:29:53.957674  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:53.957991  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:54.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:54.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:54.458053  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:54.957787  398903 type.go:168] "Request Body" body=""
	I1212 20:29:54.957892  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:54.958225  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:55.457579  398903 type.go:168] "Request Body" body=""
	I1212 20:29:55.457654  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:55.457934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:55.457987  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:55.957904  398903 type.go:168] "Request Body" body=""
	I1212 20:29:55.957979  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:55.958319  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:56.458108  398903 type.go:168] "Request Body" body=""
	I1212 20:29:56.458185  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:56.458525  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:56.958251  398903 type.go:168] "Request Body" body=""
	I1212 20:29:56.958317  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:56.958572  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:57.458381  398903 type.go:168] "Request Body" body=""
	I1212 20:29:57.458456  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:57.458824  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:57.458880  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:57.957590  398903 type.go:168] "Request Body" body=""
	I1212 20:29:57.957685  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:57.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:58.457591  398903 type.go:168] "Request Body" body=""
	I1212 20:29:58.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:58.457943  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:58.957651  398903 type.go:168] "Request Body" body=""
	I1212 20:29:58.957737  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:58.958104  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:59.457826  398903 type.go:168] "Request Body" body=""
	I1212 20:29:59.457924  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:59.458273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:59.957645  398903 type.go:168] "Request Body" body=""
	I1212 20:29:59.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:59.958054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:59.958118  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:00.457778  398903 type.go:168] "Request Body" body=""
	I1212 20:30:00.457870  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:00.458208  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:00.958235  398903 type.go:168] "Request Body" body=""
	I1212 20:30:00.958321  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:00.958755  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:01.460861  398903 type.go:168] "Request Body" body=""
	I1212 20:30:01.460950  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:01.461277  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:01.481640  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:30:01.559465  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:01.559521  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:01.559544  398903 retry.go:31] will retry after 33.36515596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:01.958099  398903 type.go:168] "Request Body" body=""
	I1212 20:30:01.958188  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:01.958490  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:01.958533  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:02.458305  398903 type.go:168] "Request Body" body=""
	I1212 20:30:02.458381  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:02.458719  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:02.958386  398903 type.go:168] "Request Body" body=""
	I1212 20:30:02.958464  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:02.958745  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:03.457579  398903 type.go:168] "Request Body" body=""
	I1212 20:30:03.457694  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:03.458099  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:03.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:30:03.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:03.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:04.457668  398903 type.go:168] "Request Body" body=""
	I1212 20:30:04.457751  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:04.458056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:04.458116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:04.957692  398903 type.go:168] "Request Body" body=""
	I1212 20:30:04.957771  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:04.958103  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:05.457691  398903 type.go:168] "Request Body" body=""
	I1212 20:30:05.457777  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:05.458124  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:05.958166  398903 type.go:168] "Request Body" body=""
	I1212 20:30:05.958257  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:05.958561  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:06.458375  398903 type.go:168] "Request Body" body=""
	I1212 20:30:06.458451  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:06.458788  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:06.458844  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:06.957529  398903 type.go:168] "Request Body" body=""
	I1212 20:30:06.957610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:06.957955  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:07.457552  398903 type.go:168] "Request Body" body=""
	I1212 20:30:07.457657  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:07.457968  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:07.957700  398903 type.go:168] "Request Body" body=""
	I1212 20:30:07.957780  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:07.958080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:08.457647  398903 type.go:168] "Request Body" body=""
	I1212 20:30:08.457728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:08.458065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:08.957730  398903 type.go:168] "Request Body" body=""
	I1212 20:30:08.957837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:08.958111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:08.958162  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:09.457851  398903 type.go:168] "Request Body" body=""
	I1212 20:30:09.457929  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:09.458309  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:09.958049  398903 type.go:168] "Request Body" body=""
	I1212 20:30:09.958147  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:09.958566  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:10.458362  398903 type.go:168] "Request Body" body=""
	I1212 20:30:10.458440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:10.458707  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:10.958517  398903 type.go:168] "Request Body" body=""
	I1212 20:30:10.958590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:10.958916  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:10.958976  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:11.457913  398903 type.go:168] "Request Body" body=""
	I1212 20:30:11.458009  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:11.458358  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:11.958078  398903 type.go:168] "Request Body" body=""
	I1212 20:30:11.958148  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:11.958429  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:12.458295  398903 type.go:168] "Request Body" body=""
	I1212 20:30:12.458371  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:12.458726  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:12.774318  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:30:12.840421  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:12.840464  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:12.840483  398903 retry.go:31] will retry after 30.011296842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
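	(Editor's note, not part of the captured log: the apply above fails because kubectl's client-side validation needs the OpenAPI schema from the apiserver, and with 192.168.49.2:8441 / localhost:8441 refusing connections that download cannot happen, so the command exits with status 1 and minikube's retry helper schedules another attempt. A minimal sketch of that retry-after-delay pattern is shown below; the helper name and durations are assumptions for illustration, not minikube's actual retry.go implementation.)

	package main

	import (
		"fmt"
		"time"
	)

	// retryAfter runs fn, and on failure waits delay before trying again,
	// up to maxAttempts times. It returns the last error if every attempt fails.
	// Illustrative only; names and durations are assumptions.
	func retryAfter(maxAttempts int, delay time.Duration, fn func() error) error {
		var err error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			if err = fn(); err == nil {
				return nil
			}
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		_ = retryAfter(3, 30*time.Second, func() error {
			// Stand-in for: kubectl apply --force -f storage-provisioner.yaml
			return fmt.Errorf("dial tcp [::1]:8441: connect: connection refused")
		})
	}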
	I1212 20:30:12.957679  398903 type.go:168] "Request Body" body=""
	I1212 20:30:12.957756  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:12.958081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:13.457610  398903 type.go:168] "Request Body" body=""
	I1212 20:30:13.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:13.457937  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:13.457978  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:13.957691  398903 type.go:168] "Request Body" body=""
	I1212 20:30:13.957779  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:13.958199  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:14.457740  398903 type.go:168] "Request Body" body=""
	I1212 20:30:14.457821  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:14.458184  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:14.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:30:14.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:14.958021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:15.457670  398903 type.go:168] "Request Body" body=""
	I1212 20:30:15.457751  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:15.458088  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:15.458148  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:15.958126  398903 type.go:168] "Request Body" body=""
	I1212 20:30:15.958215  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:15.958644  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:16.458362  398903 type.go:168] "Request Body" body=""
	I1212 20:30:16.458429  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:16.458692  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:16.958433  398903 type.go:168] "Request Body" body=""
	I1212 20:30:16.958508  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:16.958865  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:17.458563  398903 type.go:168] "Request Body" body=""
	I1212 20:30:17.458662  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:17.459072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:17.459137  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:17.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:30:17.957765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:17.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:18.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:30:18.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:18.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:18.957647  398903 type.go:168] "Request Body" body=""
	I1212 20:30:18.957740  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:18.958158  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:19.457570  398903 type.go:168] "Request Body" body=""
	I1212 20:30:19.457653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:19.457996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:19.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:30:19.957747  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:19.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:19.958157  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:20.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:30:20.457785  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:20.458135  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:20.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:30:20.957690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:20.958023  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:21.458157  398903 type.go:168] "Request Body" body=""
	I1212 20:30:21.458249  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:21.458570  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:21.958397  398903 type.go:168] "Request Body" body=""
	I1212 20:30:21.958474  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:21.958860  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:21.958919  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:22.457576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:22.457650  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:22.457962  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:22.957698  398903 type.go:168] "Request Body" body=""
	I1212 20:30:22.957818  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:22.958168  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:23.457673  398903 type.go:168] "Request Body" body=""
	I1212 20:30:23.457752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:23.458096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:23.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:23.957683  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:23.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:24.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:30:24.457734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:24.458020  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:24.458072  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:24.957672  398903 type.go:168] "Request Body" body=""
	I1212 20:30:24.957748  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:24.958123  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:25.457534  398903 type.go:168] "Request Body" body=""
	I1212 20:30:25.457604  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:25.457872  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:25.958565  398903 type.go:168] "Request Body" body=""
	I1212 20:30:25.958637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:25.958933  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:26.457975  398903 type.go:168] "Request Body" body=""
	I1212 20:30:26.458048  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:26.458392  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:26.458450  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:26.957925  398903 type.go:168] "Request Body" body=""
	I1212 20:30:26.957996  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:26.958288  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:27.457662  398903 type.go:168] "Request Body" body=""
	I1212 20:30:27.457734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:27.458086  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:27.957807  398903 type.go:168] "Request Body" body=""
	I1212 20:30:27.957887  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:27.958218  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:28.457696  398903 type.go:168] "Request Body" body=""
	I1212 20:30:28.457762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:28.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:28.957686  398903 type.go:168] "Request Body" body=""
	I1212 20:30:28.957778  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:28.958129  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:28.958185  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:29.457860  398903 type.go:168] "Request Body" body=""
	I1212 20:30:29.457948  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:29.458268  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:29.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:29.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:29.957934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:30.457654  398903 type.go:168] "Request Body" body=""
	I1212 20:30:30.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:30.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:30.957783  398903 type.go:168] "Request Body" body=""
	I1212 20:30:30.957859  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:30.958248  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:30.958301  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:31.458270  398903 type.go:168] "Request Body" body=""
	I1212 20:30:31.458363  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:31.458639  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:31.958457  398903 type.go:168] "Request Body" body=""
	I1212 20:30:31.958547  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:31.958925  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:32.457675  398903 type.go:168] "Request Body" body=""
	I1212 20:30:32.457752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:32.458042  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:32.957526  398903 type.go:168] "Request Body" body=""
	I1212 20:30:32.957599  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:32.957876  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:33.457638  398903 type.go:168] "Request Body" body=""
	I1212 20:30:33.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:33.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:33.458151  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:33.957835  398903 type.go:168] "Request Body" body=""
	I1212 20:30:33.957912  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:33.958249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:30:34.457709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:34.458076  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.925852  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:30:34.958350  398903 type.go:168] "Request Body" body=""
	I1212 20:30:34.958426  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:34.958704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.987024  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:34.990602  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:34.990708  398903 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 20:30:35.458275  398903 type.go:168] "Request Body" body=""
	I1212 20:30:35.458354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:35.458681  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:35.458739  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:35.958407  398903 type.go:168] "Request Body" body=""
	I1212 20:30:35.958492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:35.958762  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:36.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:30:36.457712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:36.458038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:36.957607  398903 type.go:168] "Request Body" body=""
	I1212 20:30:36.957687  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:36.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:37.457711  398903 type.go:168] "Request Body" body=""
	I1212 20:30:37.457790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:37.458074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:37.957761  398903 type.go:168] "Request Body" body=""
	I1212 20:30:37.957838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:37.958213  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:37.958272  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:38.457940  398903 type.go:168] "Request Body" body=""
	I1212 20:30:38.458016  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:38.458369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:38.958134  398903 type.go:168] "Request Body" body=""
	I1212 20:30:38.958210  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:38.958478  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:39.458248  398903 type.go:168] "Request Body" body=""
	I1212 20:30:39.458336  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:39.458729  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:39.958456  398903 type.go:168] "Request Body" body=""
	I1212 20:30:39.958539  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:39.958888  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:39.958942  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:40.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:30:40.457648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:40.457967  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:40.957645  398903 type.go:168] "Request Body" body=""
	I1212 20:30:40.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:40.958059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:41.458059  398903 type.go:168] "Request Body" body=""
	I1212 20:30:41.458151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:41.458482  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:41.958252  398903 type.go:168] "Request Body" body=""
	I1212 20:30:41.958327  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:41.958608  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:42.458416  398903 type.go:168] "Request Body" body=""
	I1212 20:30:42.458492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:42.458825  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:42.458889  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:42.852572  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:30:42.917565  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:42.921658  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:42.921759  398903 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 20:30:42.924799  398903 out.go:179] * Enabled addons: 
	I1212 20:30:42.926930  398903 addons.go:530] duration metric: took 1m46.993054127s for enable addons: enabled=[]
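	(Editor's note, not part of the captured log: after the retry budget is exhausted the callbacks for 'default-storageclass' and 'storage-provisioner' both surface their errors, so the addon phase ends after ~1m47s with enabled=[]. The kubectl error message itself points at the workaround of skipping client-side validation with --validate=false once the apiserver is reachable again; the sketch below shows how that same apply could be re-run from Go. The kubectl path and manifest path are taken from the log above and this is an illustration, not minikube's addon code.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Re-run the failing apply, skipping client-side (OpenAPI) validation.
		// sudo accepts VAR=value settings before the command, as in the log.
		cmd := exec.Command(
			"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"apply", "--force", "--validate=false",
			"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
		)
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Printf("apply failed: %v\n", err)
		}
	}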
	I1212 20:30:42.957819  398903 type.go:168] "Request Body" body=""
	I1212 20:30:42.957896  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:42.958219  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:43.457528  398903 type.go:168] "Request Body" body=""
	I1212 20:30:43.457600  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:43.457900  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:43.957607  398903 type.go:168] "Request Body" body=""
	I1212 20:30:43.957687  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:43.958029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:44.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:30:44.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:44.458022  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:44.957587  398903 type.go:168] "Request Body" body=""
	I1212 20:30:44.957676  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:44.957941  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:44.957982  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:45.457697  398903 type.go:168] "Request Body" body=""
	I1212 20:30:45.457796  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:45.458121  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:45.958191  398903 type.go:168] "Request Body" body=""
	I1212 20:30:45.958294  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:45.958612  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:46.458444  398903 type.go:168] "Request Body" body=""
	I1212 20:30:46.458532  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:46.458807  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:46.957599  398903 type.go:168] "Request Body" body=""
	I1212 20:30:46.957698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:46.958064  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:46.958134  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:47.457807  398903 type.go:168] "Request Body" body=""
	I1212 20:30:47.457902  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:47.458266  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:47.957963  398903 type.go:168] "Request Body" body=""
	I1212 20:30:47.958044  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:47.958323  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:48.457878  398903 type.go:168] "Request Body" body=""
	I1212 20:30:48.457954  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:48.458353  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:48.957937  398903 type.go:168] "Request Body" body=""
	I1212 20:30:48.958025  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:48.958407  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:48.958465  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:49.458150  398903 type.go:168] "Request Body" body=""
	I1212 20:30:49.458217  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:49.458483  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:49.958339  398903 type.go:168] "Request Body" body=""
	I1212 20:30:49.958422  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:49.958782  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:50.457522  398903 type.go:168] "Request Body" body=""
	I1212 20:30:50.457619  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:50.457974  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:50.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:30:50.957709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:50.957969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:51.457956  398903 type.go:168] "Request Body" body=""
	I1212 20:30:51.458033  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:51.458372  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:51.458436  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:51.958247  398903 type.go:168] "Request Body" body=""
	I1212 20:30:51.958354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:51.958760  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:52.458531  398903 type.go:168] "Request Body" body=""
	I1212 20:30:52.458606  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:52.458887  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:52.957622  398903 type.go:168] "Request Body" body=""
	I1212 20:30:52.957701  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:52.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:53.457803  398903 type.go:168] "Request Body" body=""
	I1212 20:30:53.457880  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:53.458232  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:53.957948  398903 type.go:168] "Request Body" body=""
	I1212 20:30:53.958039  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:53.958314  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:53.958357  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:54.458007  398903 type.go:168] "Request Body" body=""
	I1212 20:30:54.458120  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:54.458562  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:54.957657  398903 type.go:168] "Request Body" body=""
	I1212 20:30:54.957767  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:54.958125  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:55.457599  398903 type.go:168] "Request Body" body=""
	I1212 20:30:55.457671  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:55.458062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:55.958515  398903 type.go:168] "Request Body" body=""
	I1212 20:30:55.958592  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:55.958958  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:55.959020  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:56.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:30:56.457702  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:56.458059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:56.957581  398903 type.go:168] "Request Body" body=""
	I1212 20:30:56.957655  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:56.957949  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:57.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:30:57.457710  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:57.458063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:57.958430  398903 type.go:168] "Request Body" body=""
	I1212 20:30:57.958528  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:57.958868  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:58.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:30:58.457682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:58.458002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:58.458062  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:58.957718  398903 type.go:168] "Request Body" body=""
	I1212 20:30:58.957798  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:58.958154  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:59.457651  398903 type.go:168] "Request Body" body=""
	I1212 20:30:59.457732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:59.458077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:59.957798  398903 type.go:168] "Request Body" body=""
	I1212 20:30:59.957888  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:59.958201  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:00.457692  398903 type.go:168] "Request Body" body=""
	I1212 20:31:00.457780  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:00.458189  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:00.458250  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:00.957940  398903 type.go:168] "Request Body" body=""
	I1212 20:31:00.958024  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:00.958346  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:01.458223  398903 type.go:168] "Request Body" body=""
	I1212 20:31:01.458299  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:01.458574  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:01.958306  398903 type.go:168] "Request Body" body=""
	I1212 20:31:01.958388  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:01.958736  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:02.458565  398903 type.go:168] "Request Body" body=""
	I1212 20:31:02.458645  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:02.459016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:02.459076  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:02.957720  398903 type.go:168] "Request Body" body=""
	I1212 20:31:02.957798  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:02.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:03.457664  398903 type.go:168] "Request Body" body=""
	I1212 20:31:03.457746  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:03.458099  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:03.957853  398903 type.go:168] "Request Body" body=""
	I1212 20:31:03.957937  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:03.958274  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:04.457595  398903 type.go:168] "Request Body" body=""
	I1212 20:31:04.457669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:04.458030  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:04.957597  398903 type.go:168] "Request Body" body=""
	I1212 20:31:04.957676  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:04.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:04.958098  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:05.457625  398903 type.go:168] "Request Body" body=""
	I1212 20:31:05.457701  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:05.458052  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:05.957782  398903 type.go:168] "Request Body" body=""
	I1212 20:31:05.957863  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:05.958194  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:06.458145  398903 type.go:168] "Request Body" body=""
	I1212 20:31:06.458228  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:06.458587  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:06.958415  398903 type.go:168] "Request Body" body=""
	I1212 20:31:06.958493  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:06.958820  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:06.958879  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:07.457506  398903 type.go:168] "Request Body" body=""
	I1212 20:31:07.457575  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:07.457849  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:07.957622  398903 type.go:168] "Request Body" body=""
	I1212 20:31:07.957714  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:07.958056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:08.457776  398903 type.go:168] "Request Body" body=""
	I1212 20:31:08.457879  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:08.458223  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:08.957577  398903 type.go:168] "Request Body" body=""
	I1212 20:31:08.957652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:08.957982  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:09.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:31:09.457705  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:09.458016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:09.458076  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:09.957794  398903 type.go:168] "Request Body" body=""
	I1212 20:31:09.957907  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:09.958279  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:10.457971  398903 type.go:168] "Request Body" body=""
	I1212 20:31:10.458047  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:10.458382  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:10.958220  398903 type.go:168] "Request Body" body=""
	I1212 20:31:10.958321  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:10.958714  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:11.457646  398903 type.go:168] "Request Body" body=""
	I1212 20:31:11.457724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:11.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:11.458138  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:11.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:31:11.957664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:11.957969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:12.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:31:12.457686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:12.458031  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:12.957743  398903 type.go:168] "Request Body" body=""
	I1212 20:31:12.957841  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:12.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:13.458376  398903 type.go:168] "Request Body" body=""
	I1212 20:31:13.458443  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:13.458763  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:13.458818  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:13.958577  398903 type.go:168] "Request Body" body=""
	I1212 20:31:13.958652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:13.958977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:14.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:31:14.457733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:14.458101  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:14.957799  398903 type.go:168] "Request Body" body=""
	I1212 20:31:14.957875  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:14.958197  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:15.457653  398903 type.go:168] "Request Body" body=""
	I1212 20:31:15.457732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:15.458080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:15.958122  398903 type.go:168] "Request Body" body=""
	I1212 20:31:15.958204  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:15.958537  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:15.958599  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:16.458429  398903 type.go:168] "Request Body" body=""
	I1212 20:31:16.458501  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:16.458769  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:16.957534  398903 type.go:168] "Request Body" body=""
	I1212 20:31:16.957617  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:16.957998  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:17.457728  398903 type.go:168] "Request Body" body=""
	I1212 20:31:17.457806  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:17.458115  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:17.957591  398903 type.go:168] "Request Body" body=""
	I1212 20:31:17.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:17.958019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:18.457741  398903 type.go:168] "Request Body" body=""
	I1212 20:31:18.457847  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:18.458133  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:18.458180  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:18.957696  398903 type.go:168] "Request Body" body=""
	I1212 20:31:18.957790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:18.958212  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:19.457727  398903 type.go:168] "Request Body" body=""
	I1212 20:31:19.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:19.458140  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:19.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:31:19.957742  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:19.958077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:20.457686  398903 type.go:168] "Request Body" body=""
	I1212 20:31:20.457762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:20.458091  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:20.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:31:20.957650  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:20.957923  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:20.957972  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:21.457915  398903 type.go:168] "Request Body" body=""
	I1212 20:31:21.457990  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:21.458320  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:21.958165  398903 type.go:168] "Request Body" body=""
	I1212 20:31:21.958276  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:21.958607  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:22.458365  398903 type.go:168] "Request Body" body=""
	I1212 20:31:22.458440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:22.458716  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:22.958558  398903 type.go:168] "Request Body" body=""
	I1212 20:31:22.958659  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:22.959007  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:22.959071  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:23.457766  398903 type.go:168] "Request Body" body=""
	I1212 20:31:23.457845  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:23.458211  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:23.957896  398903 type.go:168] "Request Body" body=""
	I1212 20:31:23.957969  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:23.958315  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:24.457613  398903 type.go:168] "Request Body" body=""
	I1212 20:31:24.457714  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:24.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:24.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:31:24.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:24.958115  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:25.457623  398903 type.go:168] "Request Body" body=""
	I1212 20:31:25.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:25.457977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:25.458017  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:25.958041  398903 type.go:168] "Request Body" body=""
	I1212 20:31:25.958123  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:25.958512  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:26.458319  398903 type.go:168] "Request Body" body=""
	I1212 20:31:26.458398  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:26.458689  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:26.958470  398903 type.go:168] "Request Body" body=""
	I1212 20:31:26.958549  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:26.958846  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:27.457587  398903 type.go:168] "Request Body" body=""
	I1212 20:31:27.457677  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:27.457993  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:27.458047  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:27.957637  398903 type.go:168] "Request Body" body=""
	I1212 20:31:27.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:27.958051  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:28.457523  398903 type.go:168] "Request Body" body=""
	I1212 20:31:28.457597  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:28.457900  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:28.957667  398903 type.go:168] "Request Body" body=""
	I1212 20:31:28.957755  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:28.958112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:29.457645  398903 type.go:168] "Request Body" body=""
	I1212 20:31:29.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:29.458112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:29.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:29.957515  398903 type.go:168] "Request Body" body=""
	I1212 20:31:29.957590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:29.957922  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:30.457639  398903 type.go:168] "Request Body" body=""
	I1212 20:31:30.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:30.458057  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:30.957753  398903 type.go:168] "Request Body" body=""
	I1212 20:31:30.957854  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:30.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:31.458036  398903 type.go:168] "Request Body" body=""
	I1212 20:31:31.458104  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:31.458369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:31.458409  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:31.958181  398903 type.go:168] "Request Body" body=""
	I1212 20:31:31.958258  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:31.958643  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:32.458473  398903 type.go:168] "Request Body" body=""
	I1212 20:31:32.458585  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:32.458949  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:32.957626  398903 type.go:168] "Request Body" body=""
	I1212 20:31:32.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:32.958012  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:33.457650  398903 type.go:168] "Request Body" body=""
	I1212 20:31:33.457738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:33.458114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:33.957824  398903 type.go:168] "Request Body" body=""
	I1212 20:31:33.957905  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:33.958247  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:33.958303  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:34.458003  398903 type.go:168] "Request Body" body=""
	I1212 20:31:34.458078  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:34.458409  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:34.958240  398903 type.go:168] "Request Body" body=""
	I1212 20:31:34.958349  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:34.958734  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:35.458572  398903 type.go:168] "Request Body" body=""
	I1212 20:31:35.458682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:35.459077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:35.958480  398903 type.go:168] "Request Body" body=""
	I1212 20:31:35.958555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:35.958847  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:35.958891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:36.457738  398903 type.go:168] "Request Body" body=""
	I1212 20:31:36.457817  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:36.458167  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:36.957850  398903 type.go:168] "Request Body" body=""
	I1212 20:31:36.957948  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:36.958275  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:37.457594  398903 type.go:168] "Request Body" body=""
	I1212 20:31:37.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:37.457978  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:37.957634  398903 type.go:168] "Request Body" body=""
	I1212 20:31:37.957712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:37.958057  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:38.457680  398903 type.go:168] "Request Body" body=""
	I1212 20:31:38.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:38.458134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:38.458189  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:38.957510  398903 type.go:168] "Request Body" body=""
	I1212 20:31:38.957592  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:38.957862  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:39.457578  398903 type.go:168] "Request Body" body=""
	I1212 20:31:39.457664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:39.457985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:39.957715  398903 type.go:168] "Request Body" body=""
	I1212 20:31:39.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:39.958106  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:40.457563  398903 type.go:168] "Request Body" body=""
	I1212 20:31:40.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:40.457964  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:40.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:31:40.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:40.958114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:40.958173  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:41.457926  398903 type.go:168] "Request Body" body=""
	I1212 20:31:41.458028  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:41.458354  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:41.958180  398903 type.go:168] "Request Body" body=""
	I1212 20:31:41.958256  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:41.958548  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:42.458349  398903 type.go:168] "Request Body" body=""
	I1212 20:31:42.458439  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:42.458833  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:42.958514  398903 type.go:168] "Request Body" body=""
	I1212 20:31:42.958594  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:42.958932  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:42.958992  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:43.457618  398903 type.go:168] "Request Body" body=""
	I1212 20:31:43.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:43.458058  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:43.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:31:43.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:43.958071  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:44.457779  398903 type.go:168] "Request Body" body=""
	I1212 20:31:44.457857  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:44.458177  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:44.957579  398903 type.go:168] "Request Body" body=""
	I1212 20:31:44.957657  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:44.957982  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:45.457590  398903 type.go:168] "Request Body" body=""
	I1212 20:31:45.457667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:45.458010  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:45.458070  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:45.957784  398903 type.go:168] "Request Body" body=""
	I1212 20:31:45.957877  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:45.958249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:46.458071  398903 type.go:168] "Request Body" body=""
	I1212 20:31:46.458151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:46.458414  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:46.958212  398903 type.go:168] "Request Body" body=""
	I1212 20:31:46.958295  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:46.958642  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:47.458480  398903 type.go:168] "Request Body" body=""
	I1212 20:31:47.458558  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:47.458926  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:47.458982  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:47.957584  398903 type.go:168] "Request Body" body=""
	I1212 20:31:47.957658  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:47.957921  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:48.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:31:48.457764  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:48.458171  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:48.957862  398903 type.go:168] "Request Body" body=""
	I1212 20:31:48.957972  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:48.958326  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:49.458004  398903 type.go:168] "Request Body" body=""
	I1212 20:31:49.458083  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:49.458381  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:49.958209  398903 type.go:168] "Request Body" body=""
	I1212 20:31:49.958290  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:49.958636  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:49.958695  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:50.458420  398903 type.go:168] "Request Body" body=""
	I1212 20:31:50.458495  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:50.458818  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:50.957496  398903 type.go:168] "Request Body" body=""
	I1212 20:31:50.957563  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:50.957832  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:51.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:31:51.457746  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:51.458084  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:51.957648  398903 type.go:168] "Request Body" body=""
	I1212 20:31:51.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:51.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:52.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:31:52.457781  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:52.458111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:52.458163  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:52.957662  398903 type.go:168] "Request Body" body=""
	I1212 20:31:52.957750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:52.958096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:53.457800  398903 type.go:168] "Request Body" body=""
	I1212 20:31:53.457898  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:53.458256  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:53.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:31:53.957647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:53.957914  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:54.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:31:54.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:54.458054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:54.957782  398903 type.go:168] "Request Body" body=""
	I1212 20:31:54.957867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:54.958171  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:54.958225  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:55.457602  398903 type.go:168] "Request Body" body=""
	I1212 20:31:55.457673  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:55.457942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:55.957857  398903 type.go:168] "Request Body" body=""
	I1212 20:31:55.957935  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:55.958273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:56.458155  398903 type.go:168] "Request Body" body=""
	I1212 20:31:56.458233  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:56.458540  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:56.958285  398903 type.go:168] "Request Body" body=""
	I1212 20:31:56.958359  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:56.958625  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:56.958670  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:57.458411  398903 type.go:168] "Request Body" body=""
	I1212 20:31:57.458485  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:57.458823  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:57.958474  398903 type.go:168] "Request Body" body=""
	I1212 20:31:57.958559  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:57.958919  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:58.457568  398903 type.go:168] "Request Body" body=""
	I1212 20:31:58.457647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:58.457965  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:58.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:31:58.957725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:58.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:59.457623  398903 type.go:168] "Request Body" body=""
	I1212 20:31:59.457697  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:59.458016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:59.458072  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:59.957590  398903 type.go:168] "Request Body" body=""
	I1212 20:31:59.957669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:59.957976  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:00.457722  398903 type.go:168] "Request Body" body=""
	I1212 20:32:00.457811  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:00.458158  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:00.958017  398903 type.go:168] "Request Body" body=""
	I1212 20:32:00.958101  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:00.958428  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:01.458294  398903 type.go:168] "Request Body" body=""
	I1212 20:32:01.458366  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:01.458700  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:01.458759  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:01.958578  398903 type.go:168] "Request Body" body=""
	I1212 20:32:01.958660  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:01.959010  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:02.457649  398903 type.go:168] "Request Body" body=""
	I1212 20:32:02.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:02.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:02.957664  398903 type.go:168] "Request Body" body=""
	I1212 20:32:02.957736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:02.958135  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:03.457649  398903 type.go:168] "Request Body" body=""
	I1212 20:32:03.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:03.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:03.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:32:03.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:03.958067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:03.958124  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:04.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:32:04.457689  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:04.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:04.957738  398903 type.go:168] "Request Body" body=""
	I1212 20:32:04.957816  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:04.958159  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:05.457846  398903 type.go:168] "Request Body" body=""
	I1212 20:32:05.457928  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:05.458292  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:05.958124  398903 type.go:168] "Request Body" body=""
	I1212 20:32:05.958202  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:05.958466  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:05.958511  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:06.458381  398903 type.go:168] "Request Body" body=""
	I1212 20:32:06.458469  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:06.458820  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:06.957560  398903 type.go:168] "Request Body" body=""
	I1212 20:32:06.957684  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:06.958040  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:07.457550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:07.457620  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:07.457897  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:07.957602  398903 type.go:168] "Request Body" body=""
	I1212 20:32:07.957684  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:07.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:08.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:32:08.457680  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:08.458006  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:08.458064  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:08.958540  398903 type.go:168] "Request Body" body=""
	I1212 20:32:08.958617  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:08.958908  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:09.457585  398903 type.go:168] "Request Body" body=""
	I1212 20:32:09.457660  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:09.458015  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:09.957606  398903 type.go:168] "Request Body" body=""
	I1212 20:32:09.957683  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:09.958016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:10.457589  398903 type.go:168] "Request Body" body=""
	I1212 20:32:10.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:10.457990  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:10.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:32:10.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:10.958058  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:10.958119  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:11.458077  398903 type.go:168] "Request Body" body=""
	I1212 20:32:11.458157  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:11.458482  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:11.958236  398903 type.go:168] "Request Body" body=""
	I1212 20:32:11.958308  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:11.958586  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:12.458420  398903 type.go:168] "Request Body" body=""
	I1212 20:32:12.458497  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:12.458856  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:12.957555  398903 type.go:168] "Request Body" body=""
	I1212 20:32:12.957638  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:12.957981  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:13.460759  398903 type.go:168] "Request Body" body=""
	I1212 20:32:13.460830  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:13.461068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:13.461109  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:13.957766  398903 type.go:168] "Request Body" body=""
	I1212 20:32:13.957849  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:13.958216  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:14.457793  398903 type.go:168] "Request Body" body=""
	I1212 20:32:14.457868  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:14.458208  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:14.957890  398903 type.go:168] "Request Body" body=""
	I1212 20:32:14.957960  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:14.958230  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:15.457650  398903 type.go:168] "Request Body" body=""
	I1212 20:32:15.457735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:15.458122  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:15.957907  398903 type.go:168] "Request Body" body=""
	I1212 20:32:15.957985  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:15.958378  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:15.958434  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:16.458157  398903 type.go:168] "Request Body" body=""
	I1212 20:32:16.458233  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:16.458504  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:16.958300  398903 type.go:168] "Request Body" body=""
	I1212 20:32:16.958386  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:16.958758  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:17.458562  398903 type.go:168] "Request Body" body=""
	I1212 20:32:17.458639  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:17.458986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:17.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:32:17.957715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:17.958109  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:18.457646  398903 type.go:168] "Request Body" body=""
	I1212 20:32:18.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:18.458061  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:18.458116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:18.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:32:18.957731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:18.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:19.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:32:19.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:19.457938  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:19.957698  398903 type.go:168] "Request Body" body=""
	I1212 20:32:19.957777  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:19.958136  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:20.457625  398903 type.go:168] "Request Body" body=""
	I1212 20:32:20.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:20.458047  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:20.957741  398903 type.go:168] "Request Body" body=""
	I1212 20:32:20.957811  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:20.958082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:20.958125  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:21.458048  398903 type.go:168] "Request Body" body=""
	I1212 20:32:21.458126  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:21.458473  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:21.958279  398903 type.go:168] "Request Body" body=""
	I1212 20:32:21.958354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:21.958679  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:22.458411  398903 type.go:168] "Request Body" body=""
	I1212 20:32:22.458484  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:22.458765  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:22.958550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:22.958632  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:22.958958  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:22.959017  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:23.457629  398903 type.go:168] "Request Body" body=""
	I1212 20:32:23.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:23.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:23.957725  398903 type.go:168] "Request Body" body=""
	I1212 20:32:23.957800  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:23.958134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:24.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:32:24.457721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:24.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:24.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:32:24.957716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:24.958081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:25.457630  398903 type.go:168] "Request Body" body=""
	I1212 20:32:25.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:25.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:25.458090  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:25.958111  398903 type.go:168] "Request Body" body=""
	I1212 20:32:25.958187  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:25.958536  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:26.458306  398903 type.go:168] "Request Body" body=""
	I1212 20:32:26.458383  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:26.458747  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:26.958505  398903 type.go:168] "Request Body" body=""
	I1212 20:32:26.958576  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:26.958841  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:27.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:32:27.457680  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:27.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:27.458127  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:27.957787  398903 type.go:168] "Request Body" body=""
	I1212 20:32:27.957874  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:27.958233  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:28.457931  398903 type.go:168] "Request Body" body=""
	I1212 20:32:28.457998  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:28.458263  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:28.957554  398903 type.go:168] "Request Body" body=""
	I1212 20:32:28.957643  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:28.957977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:29.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:32:29.457711  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:29.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:29.957530  398903 type.go:168] "Request Body" body=""
	I1212 20:32:29.957610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:29.957906  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:29.957953  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:30.457609  398903 type.go:168] "Request Body" body=""
	I1212 20:32:30.457697  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:30.458040  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:30.957778  398903 type.go:168] "Request Body" body=""
	I1212 20:32:30.957864  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:30.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:31.458073  398903 type.go:168] "Request Body" body=""
	I1212 20:32:31.458140  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:31.458418  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:31.958203  398903 type.go:168] "Request Body" body=""
	I1212 20:32:31.958278  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:31.958617  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:31.958671  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:32.458448  398903 type.go:168] "Request Body" body=""
	I1212 20:32:32.458537  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:32.458868  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:32.957533  398903 type.go:168] "Request Body" body=""
	I1212 20:32:32.957609  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:32.957933  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:33.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:32:33.457708  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:33.458036  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:33.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:32:33.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:33.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:34.457588  398903 type.go:168] "Request Body" body=""
	I1212 20:32:34.457663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:34.457997  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:34.458054  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:34.957694  398903 type.go:168] "Request Body" body=""
	I1212 20:32:34.957770  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:34.958112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:35.457630  398903 type.go:168] "Request Body" body=""
	I1212 20:32:35.457708  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:35.458060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:35.957756  398903 type.go:168] "Request Body" body=""
	I1212 20:32:35.957825  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:35.958163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:36.458166  398903 type.go:168] "Request Body" body=""
	I1212 20:32:36.458243  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:36.458598  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:36.458654  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:36.958444  398903 type.go:168] "Request Body" body=""
	I1212 20:32:36.958533  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:36.958889  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:37.458453  398903 type.go:168] "Request Body" body=""
	I1212 20:32:37.458552  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:37.458884  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:37.957603  398903 type.go:168] "Request Body" body=""
	I1212 20:32:37.957686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:37.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:38.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:32:38.457739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:38.458072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:38.957536  398903 type.go:168] "Request Body" body=""
	I1212 20:32:38.957609  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:38.957905  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:38.957951  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:39.457634  398903 type.go:168] "Request Body" body=""
	I1212 20:32:39.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:39.458054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:39.957793  398903 type.go:168] "Request Body" body=""
	I1212 20:32:39.957878  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:39.958188  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:40.458558  398903 type.go:168] "Request Body" body=""
	I1212 20:32:40.458626  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:40.458896  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:40.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:32:40.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:40.958066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:40.958120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:41.457917  398903 type.go:168] "Request Body" body=""
	I1212 20:32:41.458003  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:41.458345  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:41.958008  398903 type.go:168] "Request Body" body=""
	I1212 20:32:41.958090  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:41.958391  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:42.458186  398903 type.go:168] "Request Body" body=""
	I1212 20:32:42.458268  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:42.458645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:42.958471  398903 type.go:168] "Request Body" body=""
	I1212 20:32:42.958551  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:42.958913  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:42.958969  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:43.457567  398903 type.go:168] "Request Body" body=""
	I1212 20:32:43.457639  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:43.457970  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:43.957654  398903 type.go:168] "Request Body" body=""
	I1212 20:32:43.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:43.958127  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:44.457848  398903 type.go:168] "Request Body" body=""
	I1212 20:32:44.457925  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:44.458300  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:44.957921  398903 type.go:168] "Request Body" body=""
	I1212 20:32:44.957989  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:44.958269  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:45.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:32:45.457750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:45.458108  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:45.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:45.957919  398903 type.go:168] "Request Body" body=""
	I1212 20:32:45.958010  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:45.958428  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:46.458249  398903 type.go:168] "Request Body" body=""
	I1212 20:32:46.458344  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:46.458620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:46.958392  398903 type.go:168] "Request Body" body=""
	I1212 20:32:46.958479  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:46.958844  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:47.457550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:47.457637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:47.457976  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:47.957652  398903 type.go:168] "Request Body" body=""
	I1212 20:32:47.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:47.957996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:47.958035  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:48.457660  398903 type.go:168] "Request Body" body=""
	I1212 20:32:48.457733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:48.458085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:48.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:32:48.957717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:48.958068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:49.457759  398903 type.go:168] "Request Body" body=""
	I1212 20:32:49.457832  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:49.458095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:49.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:32:49.957718  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:49.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:49.958116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:50.457791  398903 type.go:168] "Request Body" body=""
	I1212 20:32:50.457875  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:50.458204  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:50.957582  398903 type.go:168] "Request Body" body=""
	I1212 20:32:50.957654  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:50.957961  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:51.457942  398903 type.go:168] "Request Body" body=""
	I1212 20:32:51.458024  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:51.458587  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:51.958377  398903 type.go:168] "Request Body" body=""
	I1212 20:32:51.958463  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:51.958946  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:51.959008  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:52.457596  398903 type.go:168] "Request Body" body=""
	I1212 20:32:52.457667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:52.457937  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:52.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:32:52.957732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:52.958048  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:53.457745  398903 type.go:168] "Request Body" body=""
	I1212 20:32:53.457818  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:53.458155  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:53.958157  398903 type.go:168] "Request Body" body=""
	I1212 20:32:53.958227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:53.958497  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:54.458351  398903 type.go:168] "Request Body" body=""
	I1212 20:32:54.458435  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:54.458785  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:54.458844  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:54.957837  398903 type.go:168] "Request Body" body=""
	I1212 20:32:54.957927  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:54.958377  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:55.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:32:55.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:55.458049  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:55.958082  398903 type.go:168] "Request Body" body=""
	I1212 20:32:55.958157  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:55.958506  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:56.458323  398903 type.go:168] "Request Body" body=""
	I1212 20:32:56.458423  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:56.458789  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:56.958570  398903 type.go:168] "Request Body" body=""
	I1212 20:32:56.958641  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:56.958907  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:56.958949  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:57.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:32:57.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:57.458009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:57.957647  398903 type.go:168] "Request Body" body=""
	I1212 20:32:57.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:57.958085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:58.457771  398903 type.go:168] "Request Body" body=""
	I1212 20:32:58.457845  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:58.458182  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:58.957910  398903 type.go:168] "Request Body" body=""
	I1212 20:32:58.957990  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:58.958333  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:59.458167  398903 type.go:168] "Request Body" body=""
	I1212 20:32:59.458246  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:59.458600  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:59.458673  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:59.958419  398903 type.go:168] "Request Body" body=""
	I1212 20:32:59.958492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:59.958763  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:00.458626  398903 type.go:168] "Request Body" body=""
	I1212 20:33:00.458718  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:00.459178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:00.957917  398903 type.go:168] "Request Body" body=""
	I1212 20:33:00.957999  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:00.958339  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:01.458146  398903 type.go:168] "Request Body" body=""
	I1212 20:33:01.458227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:01.458496  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:01.958247  398903 type.go:168] "Request Body" body=""
	I1212 20:33:01.958324  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:01.958679  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:01.958746  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:02.458517  398903 type.go:168] "Request Body" body=""
	I1212 20:33:02.458595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:02.458922  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:02.957588  398903 type.go:168] "Request Body" body=""
	I1212 20:33:02.957664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:02.957961  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:03.457658  398903 type.go:168] "Request Body" body=""
	I1212 20:33:03.457735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:03.458091  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:03.957689  398903 type.go:168] "Request Body" body=""
	I1212 20:33:03.957766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:03.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:04.457590  398903 type.go:168] "Request Body" body=""
	I1212 20:33:04.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:04.458004  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:04.458057  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:04.957694  398903 type.go:168] "Request Body" body=""
	I1212 20:33:04.957771  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:04.958097  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:05.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:33:05.457724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:05.458077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:05.957795  398903 type.go:168] "Request Body" body=""
	I1212 20:33:05.957876  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:05.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:06.458126  398903 type.go:168] "Request Body" body=""
	I1212 20:33:06.458201  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:06.458609  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:06.458666  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:06.958431  398903 type.go:168] "Request Body" body=""
	I1212 20:33:06.958510  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:06.958861  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:07.458432  398903 type.go:168] "Request Body" body=""
	I1212 20:33:07.458505  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:07.458769  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:07.958549  398903 type.go:168] "Request Body" body=""
	I1212 20:33:07.958631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:07.958975  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:08.457668  398903 type.go:168] "Request Body" body=""
	I1212 20:33:08.457744  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:08.458100  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:08.957714  398903 type.go:168] "Request Body" body=""
	I1212 20:33:08.957786  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:08.958051  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:08.958096  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:09.457741  398903 type.go:168] "Request Body" body=""
	I1212 20:33:09.457817  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:09.458145  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:09.957623  398903 type.go:168] "Request Body" body=""
	I1212 20:33:09.957707  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:09.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:10.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:33:10.457729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:10.458029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:10.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:33:10.957729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:10.958065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:10.958120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:11.457959  398903 type.go:168] "Request Body" body=""
	I1212 20:33:11.458036  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:11.458394  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:11.958170  398903 type.go:168] "Request Body" body=""
	I1212 20:33:11.958258  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:11.958549  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:12.458358  398903 type.go:168] "Request Body" body=""
	I1212 20:33:12.458435  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:12.458775  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:12.957520  398903 type.go:168] "Request Body" body=""
	I1212 20:33:12.957604  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:12.957972  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:13.458501  398903 type.go:168] "Request Body" body=""
	I1212 20:33:13.458572  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:13.458848  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:13.458891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:13.957574  398903 type.go:168] "Request Body" body=""
	I1212 20:33:13.957653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:13.957991  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:14.457577  398903 type.go:168] "Request Body" body=""
	I1212 20:33:14.457656  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:14.457996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:14.957521  398903 type.go:168] "Request Body" body=""
	I1212 20:33:14.957595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:14.957928  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:15.457515  398903 type.go:168] "Request Body" body=""
	I1212 20:33:15.457593  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:15.457969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:15.957742  398903 type.go:168] "Request Body" body=""
	I1212 20:33:15.957819  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:15.958159  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:15.958212  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:16.457912  398903 type.go:168] "Request Body" body=""
	I1212 20:33:16.457988  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:16.458249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:16.957938  398903 type.go:168] "Request Body" body=""
	I1212 20:33:16.958013  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:16.958371  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:17.457903  398903 type.go:168] "Request Body" body=""
	I1212 20:33:17.457988  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:17.458356  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:17.957551  398903 type.go:168] "Request Body" body=""
	I1212 20:33:17.957628  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:17.957895  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:18.457585  398903 type.go:168] "Request Body" body=""
	I1212 20:33:18.457663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:18.458004  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:18.458060  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:18.957651  398903 type.go:168] "Request Body" body=""
	I1212 20:33:18.957727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:18.958085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:19.457757  398903 type.go:168] "Request Body" body=""
	I1212 20:33:19.457827  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:19.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:19.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:33:19.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:19.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:20.457628  398903 type.go:168] "Request Body" body=""
	I1212 20:33:20.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:20.458050  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:20.458103  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:20.957580  398903 type.go:168] "Request Body" body=""
	I1212 20:33:20.957651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:20.957981  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:21.457718  398903 type.go:168] "Request Body" body=""
	I1212 20:33:21.457793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:21.458138  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:21.957850  398903 type.go:168] "Request Body" body=""
	I1212 20:33:21.957933  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:21.958282  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:22.457957  398903 type.go:168] "Request Body" body=""
	I1212 20:33:22.458031  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:22.458362  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:22.458419  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:22.958162  398903 type.go:168] "Request Body" body=""
	I1212 20:33:22.958237  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:22.958574  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:23.458385  398903 type.go:168] "Request Body" body=""
	I1212 20:33:23.458462  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:23.458816  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:23.958452  398903 type.go:168] "Request Body" body=""
	I1212 20:33:23.958525  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:23.958802  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:24.458538  398903 type.go:168] "Request Body" body=""
	I1212 20:33:24.458623  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:24.458972  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:24.459028  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:24.957567  398903 type.go:168] "Request Body" body=""
	I1212 20:33:24.957643  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:24.957987  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:25.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:33:25.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:25.458002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:25.957886  398903 type.go:168] "Request Body" body=""
	I1212 20:33:25.957967  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:25.958322  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:26.458268  398903 type.go:168] "Request Body" body=""
	I1212 20:33:26.458344  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:26.458704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:26.958389  398903 type.go:168] "Request Body" body=""
	I1212 20:33:26.958460  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:26.958721  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:26.958761  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:27.458544  398903 type.go:168] "Request Body" body=""
	I1212 20:33:27.458621  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:27.458969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:27.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:33:27.957682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:27.958006  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:28.457568  398903 type.go:168] "Request Body" body=""
	I1212 20:33:28.457642  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:28.457915  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:28.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:33:28.957711  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:28.958067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:29.457799  398903 type.go:168] "Request Body" body=""
	I1212 20:33:29.457877  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:29.458218  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:29.458292  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:29.957566  398903 type.go:168] "Request Body" body=""
	I1212 20:33:29.957640  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:29.957986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:30.457705  398903 type.go:168] "Request Body" body=""
	I1212 20:33:30.457788  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:30.458134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:30.957840  398903 type.go:168] "Request Body" body=""
	I1212 20:33:30.957922  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:30.958258  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:31.458070  398903 type.go:168] "Request Body" body=""
	I1212 20:33:31.458149  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:31.458407  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:31.458480  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:31.958244  398903 type.go:168] "Request Body" body=""
	I1212 20:33:31.958322  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:31.958670  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:32.458475  398903 type.go:168] "Request Body" body=""
	I1212 20:33:32.458555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:32.458902  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:32.958470  398903 type.go:168] "Request Body" body=""
	I1212 20:33:32.958550  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:32.958844  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:33.457551  398903 type.go:168] "Request Body" body=""
	I1212 20:33:33.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:33.457948  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:33.957664  398903 type.go:168] "Request Body" body=""
	I1212 20:33:33.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:33.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:33.958117  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:34.457524  398903 type.go:168] "Request Body" body=""
	I1212 20:33:34.457599  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:34.457902  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:34.957627  398903 type.go:168] "Request Body" body=""
	I1212 20:33:34.957704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:34.958079  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:35.457784  398903 type.go:168] "Request Body" body=""
	I1212 20:33:35.457914  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:35.458250  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:35.958142  398903 type.go:168] "Request Body" body=""
	I1212 20:33:35.958225  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:35.958508  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:35.958562  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:36.458394  398903 type.go:168] "Request Body" body=""
	I1212 20:33:36.458478  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:36.458822  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:36.957589  398903 type.go:168] "Request Body" body=""
	I1212 20:33:36.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:36.958009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:37.457586  398903 type.go:168] "Request Body" body=""
	I1212 20:33:37.457669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:37.458096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:37.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:33:37.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:37.958113  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:38.457820  398903 type.go:168] "Request Body" body=""
	I1212 20:33:38.457902  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:38.458236  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:38.458295  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:38.957610  398903 type.go:168] "Request Body" body=""
	I1212 20:33:38.957699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:38.958001  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:39.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:33:39.457722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:39.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:39.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:33:39.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:39.958083  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:40.457768  398903 type.go:168] "Request Body" body=""
	I1212 20:33:40.457840  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:40.458168  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:40.957672  398903 type.go:168] "Request Body" body=""
	I1212 20:33:40.957758  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:40.958165  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:40.958231  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:41.458222  398903 type.go:168] "Request Body" body=""
	I1212 20:33:41.458298  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:41.458630  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:41.958341  398903 type.go:168] "Request Body" body=""
	I1212 20:33:41.958427  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:41.958700  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:42.458517  398903 type.go:168] "Request Body" body=""
	I1212 20:33:42.458591  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:42.458943  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:42.957649  398903 type.go:168] "Request Body" body=""
	I1212 20:33:42.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:42.958066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:43.457746  398903 type.go:168] "Request Body" body=""
	I1212 20:33:43.457813  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:43.458089  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:43.458129  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:43.957791  398903 type.go:168] "Request Body" body=""
	I1212 20:33:43.957883  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:43.958248  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:44.457980  398903 type.go:168] "Request Body" body=""
	I1212 20:33:44.458055  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:44.458393  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:44.958151  398903 type.go:168] "Request Body" body=""
	I1212 20:33:44.958223  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:44.958490  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:45.458269  398903 type.go:168] "Request Body" body=""
	I1212 20:33:45.458343  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:45.458708  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:45.458764  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:45.958513  398903 type.go:168] "Request Body" body=""
	I1212 20:33:45.958590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:45.958931  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:46.457565  398903 type.go:168] "Request Body" body=""
	I1212 20:33:46.457633  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:46.457910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:46.957631  398903 type.go:168] "Request Body" body=""
	I1212 20:33:46.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:46.958128  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:47.457846  398903 type.go:168] "Request Body" body=""
	I1212 20:33:47.457922  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:47.458245  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:47.957545  398903 type.go:168] "Request Body" body=""
	I1212 20:33:47.957618  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:47.957914  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:47.957963  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:48.457643  398903 type.go:168] "Request Body" body=""
	I1212 20:33:48.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:48.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:48.957629  398903 type.go:168] "Request Body" body=""
	I1212 20:33:48.957712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:48.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:49.457729  398903 type.go:168] "Request Body" body=""
	I1212 20:33:49.457799  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:49.458103  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:49.957633  398903 type.go:168] "Request Body" body=""
	I1212 20:33:49.957725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:49.958056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:49.958114  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:50.457640  398903 type.go:168] "Request Body" body=""
	I1212 20:33:50.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:50.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:50.957791  398903 type.go:168] "Request Body" body=""
	I1212 20:33:50.957864  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:50.958188  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:51.458156  398903 type.go:168] "Request Body" body=""
	I1212 20:33:51.458244  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:51.458588  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:51.958381  398903 type.go:168] "Request Body" body=""
	I1212 20:33:51.958464  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:51.958840  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:51.958897  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:52.458422  398903 type.go:168] "Request Body" body=""
	I1212 20:33:52.458495  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:52.458781  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:52.958521  398903 type.go:168] "Request Body" body=""
	I1212 20:33:52.958596  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:52.958935  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:53.457563  398903 type.go:168] "Request Body" body=""
	I1212 20:33:53.457641  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:53.457994  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:53.957675  398903 type.go:168] "Request Body" body=""
	I1212 20:33:53.957749  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:53.958046  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:54.457737  398903 type.go:168] "Request Body" body=""
	I1212 20:33:54.457815  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:54.458164  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:54.458229  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:54.957758  398903 type.go:168] "Request Body" body=""
	I1212 20:33:54.957838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:54.958212  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:55.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:33:55.457673  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:55.458019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:55.958073  398903 type.go:168] "Request Body" body=""
	I1212 20:33:55.958151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:55.958481  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:56.458356  398903 type.go:168] "Request Body" body=""
	I1212 20:33:56.458518  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:56.458867  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:56.458919  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:56.958475  398903 type.go:168] "Request Body" body=""
	I1212 20:33:56.958546  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:56.958806  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:57.457573  398903 type.go:168] "Request Body" body=""
	I1212 20:33:57.457662  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:57.458019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:57.957708  398903 type.go:168] "Request Body" body=""
	I1212 20:33:57.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:57.958149  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:58.457519  398903 type.go:168] "Request Body" body=""
	I1212 20:33:58.457596  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:58.457910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:58.957618  398903 type.go:168] "Request Body" body=""
	I1212 20:33:58.957702  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:58.958029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:58.958086  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:59.457639  398903 type.go:168] "Request Body" body=""
	I1212 20:33:59.457717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:59.458079  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:59.957625  398903 type.go:168] "Request Body" body=""
	I1212 20:33:59.957695  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:59.958025  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:00.457684  398903 type.go:168] "Request Body" body=""
	I1212 20:34:00.457770  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:00.458220  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:00.957723  398903 type.go:168] "Request Body" body=""
	I1212 20:34:00.957815  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:00.958152  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:00.958209  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:01.458053  398903 type.go:168] "Request Body" body=""
	I1212 20:34:01.458124  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:01.458397  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:01.958241  398903 type.go:168] "Request Body" body=""
	I1212 20:34:01.958318  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:01.958645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:02.458431  398903 type.go:168] "Request Body" body=""
	I1212 20:34:02.458517  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:02.458903  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:02.958515  398903 type.go:168] "Request Body" body=""
	I1212 20:34:02.958593  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:02.958871  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:02.958913  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:03.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:34:03.457665  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:03.458014  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:03.957750  398903 type.go:168] "Request Body" body=""
	I1212 20:34:03.957834  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:03.958178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:04.457755  398903 type.go:168] "Request Body" body=""
	I1212 20:34:04.457832  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:04.458106  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:04.957792  398903 type.go:168] "Request Body" body=""
	I1212 20:34:04.957872  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:04.958222  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:05.457932  398903 type.go:168] "Request Body" body=""
	I1212 20:34:05.458011  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:05.458316  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:05.458363  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:05.958224  398903 type.go:168] "Request Body" body=""
	I1212 20:34:05.958347  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:05.958674  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:06.457554  398903 type.go:168] "Request Body" body=""
	I1212 20:34:06.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:06.457980  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:06.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:06.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:06.958087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:07.457764  398903 type.go:168] "Request Body" body=""
	I1212 20:34:07.457837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:07.458126  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:07.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:34:07.957717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:07.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:07.958131  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:08.457790  398903 type.go:168] "Request Body" body=""
	I1212 20:34:08.457867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:08.458190  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:08.957583  398903 type.go:168] "Request Body" body=""
	I1212 20:34:08.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:08.958018  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:09.457609  398903 type.go:168] "Request Body" body=""
	I1212 20:34:09.457690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:09.457986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:09.957661  398903 type.go:168] "Request Body" body=""
	I1212 20:34:09.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:09.958082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:10.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:34:10.457682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:10.458044  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:10.458120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:10.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:34:10.957716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:10.958069  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:11.457925  398903 type.go:168] "Request Body" body=""
	I1212 20:34:11.458005  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:11.458337  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:11.957904  398903 type.go:168] "Request Body" body=""
	I1212 20:34:11.957987  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:11.958273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:12.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:34:12.457716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:12.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:12.957766  398903 type.go:168] "Request Body" body=""
	I1212 20:34:12.957844  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:12.958153  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:12.958206  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:13.457572  398903 type.go:168] "Request Body" body=""
	I1212 20:34:13.457652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:13.457977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:13.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:34:13.957752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:13.958163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:14.457645  398903 type.go:168] "Request Body" body=""
	I1212 20:34:14.457721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:14.458033  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:14.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:34:14.957669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:14.957980  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:15.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:34:15.457800  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:15.458149  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:15.458206  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:15.957907  398903 type.go:168] "Request Body" body=""
	I1212 20:34:15.958010  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:15.958356  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:16.458302  398903 type.go:168] "Request Body" body=""
	I1212 20:34:16.458374  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:16.458653  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:16.958451  398903 type.go:168] "Request Body" body=""
	I1212 20:34:16.958529  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:16.958870  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:17.457647  398903 type.go:168] "Request Body" body=""
	I1212 20:34:17.457741  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:17.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:17.957571  398903 type.go:168] "Request Body" body=""
	I1212 20:34:17.957648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:17.958005  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:17.958058  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:18.457731  398903 type.go:168] "Request Body" body=""
	I1212 20:34:18.457820  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:18.458202  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:18.957933  398903 type.go:168] "Request Body" body=""
	I1212 20:34:18.958011  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:18.958346  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:19.457582  398903 type.go:168] "Request Body" body=""
	I1212 20:34:19.457658  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:19.457973  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:19.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:34:19.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:19.958037  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:19.958084  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:20.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:34:20.457726  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:20.458052  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:20.957756  398903 type.go:168] "Request Body" body=""
	I1212 20:34:20.957830  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:20.958096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:21.458059  398903 type.go:168] "Request Body" body=""
	I1212 20:34:21.458132  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:21.458454  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:21.958169  398903 type.go:168] "Request Body" body=""
	I1212 20:34:21.958248  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:21.958614  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:21.958670  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:22.458387  398903 type.go:168] "Request Body" body=""
	I1212 20:34:22.458456  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:22.458712  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:22.958495  398903 type.go:168] "Request Body" body=""
	I1212 20:34:22.958574  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:22.958894  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:23.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:34:23.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:23.458042  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:23.957581  398903 type.go:168] "Request Body" body=""
	I1212 20:34:23.957653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:23.957931  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:24.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:34:24.457766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:24.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:24.458117  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:24.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:24.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:24.958072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:25.457596  398903 type.go:168] "Request Body" body=""
	I1212 20:34:25.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:25.458023  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:25.958032  398903 type.go:168] "Request Body" body=""
	I1212 20:34:25.958118  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:25.958454  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:26.458388  398903 type.go:168] "Request Body" body=""
	I1212 20:34:26.458463  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:26.458824  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:26.458879  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:26.958476  398903 type.go:168] "Request Body" body=""
	I1212 20:34:26.958547  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:26.958814  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:27.458579  398903 type.go:168] "Request Body" body=""
	I1212 20:34:27.458656  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:27.458987  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:27.957727  398903 type.go:168] "Request Body" body=""
	I1212 20:34:27.957802  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:27.958162  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:28.458439  398903 type.go:168] "Request Body" body=""
	I1212 20:34:28.458510  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:28.458774  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:28.958512  398903 type.go:168] "Request Body" body=""
	I1212 20:34:28.958589  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:28.958911  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:28.958974  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:29.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:34:29.457686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:29.458020  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:29.957734  398903 type.go:168] "Request Body" body=""
	I1212 20:34:29.957825  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:29.958161  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:30.457641  398903 type.go:168] "Request Body" body=""
	I1212 20:34:30.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:30.458083  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:30.957610  398903 type.go:168] "Request Body" body=""
	I1212 20:34:30.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:30.958024  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:31.457903  398903 type.go:168] "Request Body" body=""
	I1212 20:34:31.458012  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:31.458336  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:31.458388  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:31.958144  398903 type.go:168] "Request Body" body=""
	I1212 20:34:31.958227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:31.958581  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:32.458466  398903 type.go:168] "Request Body" body=""
	I1212 20:34:32.458569  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:32.458930  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:32.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:34:32.957651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:32.957985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:33.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:34:33.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:33.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:33.957814  398903 type.go:168] "Request Body" body=""
	I1212 20:34:33.957889  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:33.958221  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:33.958279  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:34.457576  398903 type.go:168] "Request Body" body=""
	I1212 20:34:34.457651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:34.457968  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:34.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:34:34.957724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:34.958077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:35.457792  398903 type.go:168] "Request Body" body=""
	I1212 20:34:35.457876  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:35.458181  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:35.958034  398903 type.go:168] "Request Body" body=""
	I1212 20:34:35.958104  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:35.958369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:35.958411  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:36.458355  398903 type.go:168] "Request Body" body=""
	I1212 20:34:36.458432  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:36.458815  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:36.957543  398903 type.go:168] "Request Body" body=""
	I1212 20:34:36.957626  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:36.957947  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:37.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:34:37.457678  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:37.457995  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:37.957635  398903 type.go:168] "Request Body" body=""
	I1212 20:34:37.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:37.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:38.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:34:38.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:38.458116  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:38.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:38.957684  398903 type.go:168] "Request Body" body=""
	I1212 20:34:38.957762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:38.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:39.457740  398903 type.go:168] "Request Body" body=""
	I1212 20:34:39.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:39.458189  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:39.957892  398903 type.go:168] "Request Body" body=""
	I1212 20:34:39.957975  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:39.958305  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:40.457581  398903 type.go:168] "Request Body" body=""
	I1212 20:34:40.457659  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:40.457974  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:40.957654  398903 type.go:168] "Request Body" body=""
	I1212 20:34:40.957727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:40.958080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:40.958134  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:41.457945  398903 type.go:168] "Request Body" body=""
	I1212 20:34:41.458029  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:41.458375  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:41.958149  398903 type.go:168] "Request Body" body=""
	I1212 20:34:41.958218  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:41.958489  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:42.458344  398903 type.go:168] "Request Body" body=""
	I1212 20:34:42.458423  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:42.458797  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:42.957548  398903 type.go:168] "Request Body" body=""
	I1212 20:34:42.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:42.958002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:43.457680  398903 type.go:168] "Request Body" body=""
	I1212 20:34:43.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:43.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:43.458139  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:43.957634  398903 type.go:168] "Request Body" body=""
	I1212 20:34:43.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:43.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:44.457784  398903 type.go:168] "Request Body" body=""
	I1212 20:34:44.457863  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:44.458214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:44.957493  398903 type.go:168] "Request Body" body=""
	I1212 20:34:44.957567  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:44.957832  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:45.457549  398903 type.go:168] "Request Body" body=""
	I1212 20:34:45.457634  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:45.457985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:45.957790  398903 type.go:168] "Request Body" body=""
	I1212 20:34:45.957867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:45.958220  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:45.958281  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:46.458047  398903 type.go:168] "Request Body" body=""
	I1212 20:34:46.458139  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:46.458408  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:46.958199  398903 type.go:168] "Request Body" body=""
	I1212 20:34:46.958280  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:46.958672  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:47.458502  398903 type.go:168] "Request Body" body=""
	I1212 20:34:47.458578  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:47.458923  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:47.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:34:47.957667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:47.958000  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:48.457673  398903 type.go:168] "Request Body" body=""
	I1212 20:34:48.457766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:48.458114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:48.458163  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:48.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:34:48.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:48.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:49.457750  398903 type.go:168] "Request Body" body=""
	I1212 20:34:49.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:49.458132  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:49.957625  398903 type.go:168] "Request Body" body=""
	I1212 20:34:49.957700  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:49.958065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:50.457775  398903 type.go:168] "Request Body" body=""
	I1212 20:34:50.457853  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:50.458187  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:50.458247  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:50.957570  398903 type.go:168] "Request Body" body=""
	I1212 20:34:50.957642  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:50.957959  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:51.457904  398903 type.go:168] "Request Body" body=""
	I1212 20:34:51.458001  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:51.458321  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:51.957626  398903 type.go:168] "Request Body" body=""
	I1212 20:34:51.957709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:51.958019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:52.457677  398903 type.go:168] "Request Body" body=""
	I1212 20:34:52.457750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:52.458071  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:52.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:52.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:52.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:52.958126  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:53.457793  398903 type.go:168] "Request Body" body=""
	I1212 20:34:53.457868  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:53.458211  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:53.957606  398903 type.go:168] "Request Body" body=""
	I1212 20:34:53.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:53.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:54.457738  398903 type.go:168] "Request Body" body=""
	I1212 20:34:54.457816  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:54.458178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:54.957898  398903 type.go:168] "Request Body" body=""
	I1212 20:34:54.957979  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:54.958335  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:54.958392  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:55.457874  398903 type.go:168] "Request Body" body=""
	I1212 20:34:55.457957  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:55.461901  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:34:55.957753  398903 type.go:168] "Request Body" body=""
	I1212 20:34:55.957835  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:55.958180  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:56.458205  398903 type.go:168] "Request Body" body=""
	I1212 20:34:56.458289  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:56.458646  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:56.958289  398903 type.go:168] "Request Body" body=""
	I1212 20:34:56.958348  398903 node_ready.go:38] duration metric: took 6m0.000942014s for node "functional-261311" to be "Ready" ...
	I1212 20:34:56.961249  398903 out.go:203] 
	W1212 20:34:56.963984  398903 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 20:34:56.964005  398903 out.go:285] * 
	W1212 20:34:56.966156  398903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:34:56.969023  398903 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.645916855Z" level=info msg="Using the internal default seccomp profile"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.645924379Z" level=info msg="AppArmor is disabled by the system or at CRI-O build-time"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.645930903Z" level=info msg="No blockio config file specified, blockio not configured"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.645936753Z" level=info msg="RDT not available in the host system"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.645950013Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.646683583Z" level=info msg="Conmon does support the --sync option"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.646710381Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.64672831Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.647590316Z" level=info msg="Conmon does support the --sync option"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.647612594Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.647752583Z" level=info msg="Updated default CNI network name to "
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.648322057Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oc
i/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"cgroupfs\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n
uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_
memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_d
ir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [c
rio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.648859975Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.648918872Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697369859Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697535129Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697630006Z" level=info msg="Create NRI interface"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697796219Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697818832Z" level=info msg="runtime interface created"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697832838Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697839345Z" level=info msg="runtime interface starting up..."
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697845639Z" level=info msg="starting plugins..."
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697862041Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 20:28:54 functional-261311 crio[5365]: time="2025-12-12T20:28:54.697933098Z" level=info msg="No systemd watchdog enabled"
	Dec 12 20:28:54 functional-261311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:35:01.917403    8738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:01.917841    8738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:01.919391    8738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:01.919951    8738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:01.921241    8738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:35:01 up  3:17,  0 user,  load average: 0.37, 0.31, 0.91
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:34:59 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:34:59 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1140.
	Dec 12 20:34:59 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:34:59 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:00 functional-261311 kubelet[8611]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:00 functional-261311 kubelet[8611]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:00 functional-261311 kubelet[8611]: E1212 20:35:00.013050    8611 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:35:00 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:35:00 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:35:00 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1141.
	Dec 12 20:35:00 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:00 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:00 functional-261311 kubelet[8630]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:00 functional-261311 kubelet[8630]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:00 functional-261311 kubelet[8630]: E1212 20:35:00.769393    8630 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:35:00 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:35:00 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:35:01 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1142.
	Dec 12 20:35:01 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:01 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:01 functional-261311 kubelet[8651]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:01 functional-261311 kubelet[8651]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:01 functional-261311 kubelet[8651]: E1212 20:35:01.520498    8651 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:35:01 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:35:01 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
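The repeated GET https://192.168.49.2:8441/api/v1/nodes/functional-261311 requests in the log above, ending with the 6m0s WaitNodeCondition timeout, are the node-readiness poll failing because nothing answers on that apiserver port. A minimal client-go sketch of that kind of poll (an illustration only, not minikube's actual node_ready.go; the kubeconfig path and retry interval are assumptions):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "functional-261311", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between attempts
		}
		fmt.Println("timed out waiting for node to be Ready")
	}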
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (372.604129ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (2.50s)
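The kubelet section of the dump above points at the likely root cause: kubelet has been restarted over 1100 times because "kubelet is configured to not run on a host using cgroup v1", so the apiserver never comes back up and every kubectl call in the tests below fails with connection refused. A small sketch of the underlying host check (an assumption for illustration: on a cgroup v2 host the unified cgroup.controllers file exists, on cgroup v1 it does not):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// On a unified (cgroup v2) hierarchy this file exists; on cgroup v1 it does not.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("host is on cgroup v2")
		} else {
			fmt.Println("host is on cgroup v1 (the configuration this kubelet refuses to run with)")
		}
	}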

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 kubectl -- --context functional-261311 get pods
functional_test.go:731: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 kubectl -- --context functional-261311 get pods: exit status 1 (115.263532ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-linux-arm64 -p functional-261311 kubectl -- --context functional-261311 get pods": exit status 1
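As with the previous test, the failure here is simply the apiserver endpoint refusing connections. A quick reachability check of the address named in the stderr above (a throwaway diagnostic sketch, not part of the test suite):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 192.168.49.2:8441 is the apiserver address from the error above.
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}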
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
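The inspect output above shows the container itself is running and that 8441/tcp is published on 127.0.0.1:33165; only the apiserver inside it is down. A sketch (assuming docker is on PATH) of pulling that port mapping out of the same JSON:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "functional-261311").Output()
		if err != nil {
			panic(err)
		}
		var containers []inspect
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		// Print where the apiserver port is published on the host (127.0.0.1:33165 in this run).
		for _, b := range containers[0].NetworkSettings.Ports["8441/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
		}
	}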
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (337.011572ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-261311 logs -n 25: (1.049663954s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-205528 image ls --format json --alsologtostderr                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr                                            │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format table --alsologtostderr                                                                                       │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ delete         │ -p functional-205528                                                                                                                              │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ start          │ -p functional-261311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ start          │ -p functional-261311 --alsologtostderr -v=8                                                                                                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:28 UTC │                     │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:latest                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add minikube-local-cache-test:functional-261311                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache delete minikube-local-cache-test:functional-261311                                                                        │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl images                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	│ cache          │ functional-261311 cache reload                                                                                                                    │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ kubectl        │ functional-261311 kubectl -- --context functional-261311 get pods                                                                                 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:28:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
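
The header layout quoted above is the standard klog format, so individual fields can be pulled out of these lines mechanically. A minimal shell sketch, assuming the log has been saved to a file (the name start.log is hypothetical) and mirroring the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout stated above:

    # Split klog-style lines into severity, date, time, thread id, source location and message.
    grep -E '^[IWEF][0-9]{4} ' start.log |
      sed -E 's/^([IWEF])([0-9]{4}) ([0-9:.]+) +([0-9]+) ([^]]+)\] (.*)$/sev=\1 date=\2 time=\3 tid=\4 src=\5 msg=\6/'
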
	I1212 20:28:51.200639  398903 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:28:51.200813  398903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:28:51.200825  398903 out.go:374] Setting ErrFile to fd 2...
	I1212 20:28:51.200844  398903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:28:51.201121  398903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:28:51.201526  398903 out.go:368] Setting JSON to false
	I1212 20:28:51.202423  398903 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11484,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:28:51.202499  398903 start.go:143] virtualization:  
	I1212 20:28:51.205894  398903 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:28:51.209621  398903 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:28:51.209743  398903 notify.go:221] Checking for updates...
	I1212 20:28:51.215382  398903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:28:51.218267  398903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:51.221168  398903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:28:51.224043  398903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:28:51.227018  398903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:28:51.230467  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:51.230581  398903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:28:51.269738  398903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:28:51.269857  398903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:28:51.341809  398903 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:28:51.330621143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:28:51.341929  398903 docker.go:319] overlay module found
	I1212 20:28:51.347026  398903 out.go:179] * Using the docker driver based on existing profile
	I1212 20:28:51.349898  398903 start.go:309] selected driver: docker
	I1212 20:28:51.349928  398903 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:51.350015  398903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:28:51.350136  398903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:28:51.408041  398903 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:28:51.398420734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:28:51.408534  398903 cni.go:84] Creating CNI manager for ""
	I1212 20:28:51.408600  398903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:28:51.408656  398903 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:51.413511  398903 out.go:179] * Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	I1212 20:28:51.416491  398903 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:28:51.419403  398903 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:28:51.422306  398903 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:28:51.422357  398903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:28:51.422368  398903 cache.go:65] Caching tarball of preloaded images
	I1212 20:28:51.422458  398903 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:28:51.422471  398903 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:28:51.422591  398903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:28:51.422818  398903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:28:51.441630  398903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:28:51.441653  398903 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:28:51.441676  398903 cache.go:243] Successfully downloaded all kic artifacts
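
The kic base image lookup above ("Checking for ... in local docker daemon" / "exists in daemon, skipping load") can be reproduced by hand. An illustrative check, with the repository name taken from the log lines above:

    # List local tags and digests for the kic base image minikube resolved in the daemon.
    docker images --digests gcr.io/k8s-minikube/kicbase-builds
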
	I1212 20:28:51.441708  398903 start.go:360] acquireMachinesLock for functional-261311: {Name:mkbc4e6c743e47953e99b8ce65e244d33b483105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:28:51.441778  398903 start.go:364] duration metric: took 45.9µs to acquireMachinesLock for "functional-261311"
	I1212 20:28:51.441803  398903 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:28:51.441812  398903 fix.go:54] fixHost starting: 
	I1212 20:28:51.442073  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:51.469956  398903 fix.go:112] recreateIfNeeded on functional-261311: state=Running err=<nil>
	W1212 20:28:51.469989  398903 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:28:51.473238  398903 out.go:252] * Updating the running docker "functional-261311" container ...
	I1212 20:28:51.473304  398903 machine.go:94] provisionDockerMachine start ...
	I1212 20:28:51.473396  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.494630  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.494961  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.494976  398903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:28:51.648147  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:28:51.648174  398903 ubuntu.go:182] provisioning hostname "functional-261311"
	I1212 20:28:51.648237  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.668778  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.669090  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.669106  398903 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-261311 && echo "functional-261311" | sudo tee /etc/hostname
	I1212 20:28:51.829776  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:28:51.829853  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.848648  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.848971  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.848987  398903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-261311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-261311/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-261311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:28:52.002627  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
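
The hostname script above only rewrites or appends an /etc/hosts entry when the node name is missing, so after provisioning the container should carry a 127.0.1.1 mapping for it. An illustrative check, run inside the functional-261311 container:

    # Expect the node name mapped to 127.0.1.1 once provisioning has completed.
    grep -n '127.0.1.1' /etc/hosts
    # e.g. 127.0.1.1 functional-261311
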
	I1212 20:28:52.002659  398903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:28:52.002689  398903 ubuntu.go:190] setting up certificates
	I1212 20:28:52.002713  398903 provision.go:84] configureAuth start
	I1212 20:28:52.002795  398903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:28:52.023958  398903 provision.go:143] copyHostCerts
	I1212 20:28:52.024006  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:28:52.024050  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:28:52.024064  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:28:52.024145  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:28:52.024243  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:28:52.024271  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:28:52.024280  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:28:52.024310  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:28:52.024357  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:28:52.024421  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:28:52.024431  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:28:52.024463  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:28:52.024521  398903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.functional-261311 san=[127.0.0.1 192.168.49.2 functional-261311 localhost minikube]
	I1212 20:28:52.567706  398903 provision.go:177] copyRemoteCerts
	I1212 20:28:52.567776  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:28:52.567821  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:52.585858  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:52.692768  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:28:52.692828  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:28:52.711466  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:28:52.711534  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:28:52.730742  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:28:52.730815  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:28:52.749109  398903 provision.go:87] duration metric: took 746.363484ms to configureAuth
	I1212 20:28:52.749138  398903 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:28:52.749373  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:52.749480  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:52.767233  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:52.767548  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:52.767570  398903 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:28:53.124031  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:28:53.124063  398903 machine.go:97] duration metric: took 1.650735569s to provisionDockerMachine
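
Provisioning ended by writing the CRIO_MINIKUBE_OPTIONS drop-in shown above and restarting CRI-O. A quick way to confirm the option landed; this sketch assumes the kicbase crio.service sources /etc/sysconfig/crio.minikube via EnvironmentFile, which this log does not show:

    # Inspect the drop-in written by the provisioner and how the unit would pick it up (assumed wiring).
    cat /etc/sysconfig/crio.minikube
    systemctl cat crio.service | grep -i EnvironmentFile
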
	I1212 20:28:53.124076  398903 start.go:293] postStartSetup for "functional-261311" (driver="docker")
	I1212 20:28:53.124090  398903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:28:53.124184  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:28:53.124249  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.144150  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.248393  398903 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:28:53.251578  398903 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 20:28:53.251600  398903 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 20:28:53.251605  398903 command_runner.go:130] > VERSION_ID="12"
	I1212 20:28:53.251610  398903 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 20:28:53.251614  398903 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 20:28:53.251618  398903 command_runner.go:130] > ID=debian
	I1212 20:28:53.251623  398903 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 20:28:53.251629  398903 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 20:28:53.251634  398903 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 20:28:53.251713  398903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:28:53.251736  398903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:28:53.251748  398903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:28:53.251809  398903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:28:53.251889  398903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:28:53.251900  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:28:53.251976  398903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> hosts in /etc/test/nested/copy/364853
	I1212 20:28:53.251984  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> /etc/test/nested/copy/364853/hosts
	I1212 20:28:53.252026  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364853
	I1212 20:28:53.259320  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:28:53.277130  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts --> /etc/test/nested/copy/364853/hosts (40 bytes)
	I1212 20:28:53.294238  398903 start.go:296] duration metric: took 170.145848ms for postStartSetup
	I1212 20:28:53.294390  398903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:28:53.294470  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.312603  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.412930  398903 command_runner.go:130] > 11%
	I1212 20:28:53.413464  398903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:28:53.417828  398903 command_runner.go:130] > 174G
	I1212 20:28:53.418334  398903 fix.go:56] duration metric: took 1.976518079s for fixHost
	I1212 20:28:53.418383  398903 start.go:83] releasing machines lock for "functional-261311", held for 1.976583573s
	I1212 20:28:53.418465  398903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:28:53.435134  398903 ssh_runner.go:195] Run: cat /version.json
	I1212 20:28:53.435190  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.435445  398903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:28:53.435511  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.452987  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.462005  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.555880  398903 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 20:28:53.556060  398903 ssh_runner.go:195] Run: systemctl --version
	I1212 20:28:53.643428  398903 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 20:28:53.646219  398903 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 20:28:53.646272  398903 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 20:28:53.646362  398903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:28:53.685489  398903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 20:28:53.690919  398903 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 20:28:53.690960  398903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:28:53.691016  398903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:28:53.699790  398903 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:28:53.699851  398903 start.go:496] detecting cgroup driver to use...
	I1212 20:28:53.699883  398903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:28:53.699937  398903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:28:53.716256  398903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:28:53.731380  398903 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:28:53.731442  398903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:28:53.747947  398903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:28:53.763704  398903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:28:53.877723  398903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:28:53.997385  398903 docker.go:234] disabling docker service ...
	I1212 20:28:53.997457  398903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:28:54.016313  398903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:28:54.032112  398903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:28:54.157667  398903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:28:54.273189  398903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
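
After the stop/disable/mask sequence above, neither Docker nor cri-dockerd should come back across restarts. An illustrative verification:

    # Both queries should report the units as inactive and masked after the steps above.
    sudo systemctl is-active docker.service cri-docker.service || true
    sudo systemctl is-enabled docker.socket cri-docker.socket || true
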
	I1212 20:28:54.288211  398903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:28:54.301284  398903 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
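
With the runtime endpoint written to /etc/crictl.yaml above, crictl reaches CRI-O without extra flags, since it reads that file by default. An illustrative check:

    # Confirm crictl resolves the CRI-O socket configured above.
    cat /etc/crictl.yaml
    sudo crictl version
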
	I1212 20:28:54.302509  398903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:28:54.302613  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.311343  398903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:28:54.311460  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.320776  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.330058  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.340191  398903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:28:54.348326  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.357164  398903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.365464  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.374528  398903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:28:54.381778  398903 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 20:28:54.382795  398903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:28:54.390360  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:54.529224  398903 ssh_runner.go:195] Run: sudo systemctl restart crio
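
Taken together, the sed edits above pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged low ports before CRI-O is restarted. The affected lines of /etc/crio/crio.conf.d/02-crio.conf should end up roughly as sketched below (an illustrative reconstruction from the commands above, not a dump from this run):

    # Expected shape of the edited drop-in after the sed commands above.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",
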
	I1212 20:28:54.703666  398903 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:28:54.703740  398903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:28:54.707780  398903 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 20:28:54.707808  398903 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 20:28:54.707826  398903 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1212 20:28:54.707834  398903 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:28:54.707840  398903 command_runner.go:130] > Access: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707850  398903 command_runner.go:130] > Modify: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707858  398903 command_runner.go:130] > Change: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707861  398903 command_runner.go:130] >  Birth: -
	I1212 20:28:54.707934  398903 start.go:564] Will wait 60s for crictl version
	I1212 20:28:54.708017  398903 ssh_runner.go:195] Run: which crictl
	I1212 20:28:54.711729  398903 command_runner.go:130] > /usr/local/bin/crictl
	I1212 20:28:54.711909  398903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:28:54.737852  398903 command_runner.go:130] > Version:  0.1.0
	I1212 20:28:54.737888  398903 command_runner.go:130] > RuntimeName:  cri-o
	I1212 20:28:54.737895  398903 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1212 20:28:54.737901  398903 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 20:28:54.740042  398903 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:28:54.740184  398903 ssh_runner.go:195] Run: crio --version
	I1212 20:28:54.769676  398903 command_runner.go:130] > crio version 1.34.3
	I1212 20:28:54.769713  398903 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 20:28:54.769720  398903 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 20:28:54.769725  398903 command_runner.go:130] >    GitTreeState:   dirty
	I1212 20:28:54.769750  398903 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 20:28:54.769764  398903 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 20:28:54.769768  398903 command_runner.go:130] >    Compiler:       gc
	I1212 20:28:54.769788  398903 command_runner.go:130] >    Platform:       linux/arm64
	I1212 20:28:54.769802  398903 command_runner.go:130] >    Linkmode:       static
	I1212 20:28:54.769806  398903 command_runner.go:130] >    BuildTags:
	I1212 20:28:54.769810  398903 command_runner.go:130] >      static
	I1212 20:28:54.769813  398903 command_runner.go:130] >      netgo
	I1212 20:28:54.769832  398903 command_runner.go:130] >      osusergo
	I1212 20:28:54.769838  398903 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 20:28:54.769842  398903 command_runner.go:130] >      seccomp
	I1212 20:28:54.769849  398903 command_runner.go:130] >      apparmor
	I1212 20:28:54.769852  398903 command_runner.go:130] >      selinux
	I1212 20:28:54.769859  398903 command_runner.go:130] >    LDFlags:          unknown
	I1212 20:28:54.769867  398903 command_runner.go:130] >    SeccompEnabled:   true
	I1212 20:28:54.769872  398903 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 20:28:54.769969  398903 ssh_runner.go:195] Run: crio --version
	I1212 20:28:54.796781  398903 command_runner.go:130] > crio version 1.34.3
	I1212 20:28:54.796850  398903 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 20:28:54.796873  398903 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 20:28:54.796896  398903 command_runner.go:130] >    GitTreeState:   dirty
	I1212 20:28:54.796933  398903 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 20:28:54.796961  398903 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 20:28:54.796982  398903 command_runner.go:130] >    Compiler:       gc
	I1212 20:28:54.797005  398903 command_runner.go:130] >    Platform:       linux/arm64
	I1212 20:28:54.797036  398903 command_runner.go:130] >    Linkmode:       static
	I1212 20:28:54.797055  398903 command_runner.go:130] >    BuildTags:
	I1212 20:28:54.797071  398903 command_runner.go:130] >      static
	I1212 20:28:54.797089  398903 command_runner.go:130] >      netgo
	I1212 20:28:54.797108  398903 command_runner.go:130] >      osusergo
	I1212 20:28:54.797151  398903 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 20:28:54.797177  398903 command_runner.go:130] >      seccomp
	I1212 20:28:54.797197  398903 command_runner.go:130] >      apparmor
	I1212 20:28:54.797231  398903 command_runner.go:130] >      selinux
	I1212 20:28:54.797262  398903 command_runner.go:130] >    LDFlags:          unknown
	I1212 20:28:54.797290  398903 command_runner.go:130] >    SeccompEnabled:   true
	I1212 20:28:54.797309  398903 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 20:28:54.804038  398903 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:28:54.806949  398903 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:28:54.823441  398903 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:28:54.827623  398903 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 20:28:54.827865  398903 kubeadm.go:884] updating cluster {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:28:54.827977  398903 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:28:54.828031  398903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:28:54.860175  398903 command_runner.go:130] > {
	I1212 20:28:54.860197  398903 command_runner.go:130] >   "images":  [
	I1212 20:28:54.860201  398903 command_runner.go:130] >     {
	I1212 20:28:54.860214  398903 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 20:28:54.860219  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860225  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 20:28:54.860229  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860233  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860242  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 20:28:54.860250  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 20:28:54.860254  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860258  398903 command_runner.go:130] >       "size":  "111333938",
	I1212 20:28:54.860263  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860270  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860274  398903 command_runner.go:130] >     },
	I1212 20:28:54.860277  398903 command_runner.go:130] >     {
	I1212 20:28:54.860285  398903 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 20:28:54.860289  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860295  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:28:54.860298  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860302  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860310  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 20:28:54.860333  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 20:28:54.860341  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860346  398903 command_runner.go:130] >       "size":  "29037500",
	I1212 20:28:54.860350  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860357  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860360  398903 command_runner.go:130] >     },
	I1212 20:28:54.860363  398903 command_runner.go:130] >     {
	I1212 20:28:54.860391  398903 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 20:28:54.860396  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860401  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 20:28:54.860404  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860408  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860417  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 20:28:54.860425  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 20:28:54.860428  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860434  398903 command_runner.go:130] >       "size":  "74491780",
	I1212 20:28:54.860439  398903 command_runner.go:130] >       "username":  "nonroot",
	I1212 20:28:54.860443  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860447  398903 command_runner.go:130] >     },
	I1212 20:28:54.860456  398903 command_runner.go:130] >     {
	I1212 20:28:54.860463  398903 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 20:28:54.860467  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860472  398903 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 20:28:54.860478  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860482  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860490  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 20:28:54.860497  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 20:28:54.860505  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860510  398903 command_runner.go:130] >       "size":  "60857170",
	I1212 20:28:54.860513  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860517  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860521  398903 command_runner.go:130] >       },
	I1212 20:28:54.860530  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860534  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860540  398903 command_runner.go:130] >     },
	I1212 20:28:54.860546  398903 command_runner.go:130] >     {
	I1212 20:28:54.860552  398903 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 20:28:54.860558  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860564  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 20:28:54.860567  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860577  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860594  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 20:28:54.860603  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 20:28:54.860610  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860614  398903 command_runner.go:130] >       "size":  "84949999",
	I1212 20:28:54.860618  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860622  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860625  398903 command_runner.go:130] >       },
	I1212 20:28:54.860630  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860636  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860639  398903 command_runner.go:130] >     },
	I1212 20:28:54.860643  398903 command_runner.go:130] >     {
	I1212 20:28:54.860652  398903 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 20:28:54.860659  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860665  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 20:28:54.860668  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860672  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860684  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 20:28:54.860695  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 20:28:54.860698  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860702  398903 command_runner.go:130] >       "size":  "72170325",
	I1212 20:28:54.860706  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860711  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860717  398903 command_runner.go:130] >       },
	I1212 20:28:54.860721  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860726  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860739  398903 command_runner.go:130] >     },
	I1212 20:28:54.860747  398903 command_runner.go:130] >     {
	I1212 20:28:54.860754  398903 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 20:28:54.860760  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860766  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 20:28:54.860769  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860773  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860781  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 20:28:54.860792  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 20:28:54.860796  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860801  398903 command_runner.go:130] >       "size":  "74106775",
	I1212 20:28:54.860807  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860811  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860817  398903 command_runner.go:130] >     },
	I1212 20:28:54.860820  398903 command_runner.go:130] >     {
	I1212 20:28:54.860827  398903 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 20:28:54.860831  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860839  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 20:28:54.860844  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860854  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860863  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 20:28:54.860876  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 20:28:54.860883  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860887  398903 command_runner.go:130] >       "size":  "49822549",
	I1212 20:28:54.860891  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860895  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860905  398903 command_runner.go:130] >       },
	I1212 20:28:54.860908  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860912  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860922  398903 command_runner.go:130] >     },
	I1212 20:28:54.860925  398903 command_runner.go:130] >     {
	I1212 20:28:54.860932  398903 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 20:28:54.860938  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860944  398903 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.860948  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860953  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860961  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 20:28:54.860971  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 20:28:54.860975  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860979  398903 command_runner.go:130] >       "size":  "519884",
	I1212 20:28:54.860984  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860991  398903 command_runner.go:130] >         "value":  "65535"
	I1212 20:28:54.860994  398903 command_runner.go:130] >       },
	I1212 20:28:54.861000  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.861004  398903 command_runner.go:130] >       "pinned":  true
	I1212 20:28:54.861014  398903 command_runner.go:130] >     }
	I1212 20:28:54.861017  398903 command_runner.go:130] >   ]
	I1212 20:28:54.861020  398903 command_runner.go:130] > }
	I1212 20:28:54.861204  398903 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:28:54.861218  398903 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:28:54.861275  398903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:28:54.883482  398903 command_runner.go:130] > {
	I1212 20:28:54.883501  398903 command_runner.go:130] >   "images":  [
	I1212 20:28:54.883506  398903 command_runner.go:130] >     {
	I1212 20:28:54.883514  398903 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 20:28:54.883520  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883526  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 20:28:54.883529  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883533  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883547  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 20:28:54.883556  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 20:28:54.883560  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883564  398903 command_runner.go:130] >       "size":  "111333938",
	I1212 20:28:54.883568  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883574  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883577  398903 command_runner.go:130] >     },
	I1212 20:28:54.883580  398903 command_runner.go:130] >     {
	I1212 20:28:54.883587  398903 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 20:28:54.883591  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883597  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:28:54.883600  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883604  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883612  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 20:28:54.883620  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 20:28:54.883624  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883628  398903 command_runner.go:130] >       "size":  "29037500",
	I1212 20:28:54.883632  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883638  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883641  398903 command_runner.go:130] >     },
	I1212 20:28:54.883645  398903 command_runner.go:130] >     {
	I1212 20:28:54.883652  398903 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 20:28:54.883656  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883663  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 20:28:54.883666  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883670  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883679  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 20:28:54.883687  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 20:28:54.883690  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883695  398903 command_runner.go:130] >       "size":  "74491780",
	I1212 20:28:54.883699  398903 command_runner.go:130] >       "username":  "nonroot",
	I1212 20:28:54.883702  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883706  398903 command_runner.go:130] >     },
	I1212 20:28:54.883712  398903 command_runner.go:130] >     {
	I1212 20:28:54.883719  398903 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 20:28:54.883723  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883728  398903 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 20:28:54.883733  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883737  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883745  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 20:28:54.883752  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 20:28:54.883756  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883759  398903 command_runner.go:130] >       "size":  "60857170",
	I1212 20:28:54.883763  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883767  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883770  398903 command_runner.go:130] >       },
	I1212 20:28:54.883778  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883783  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883786  398903 command_runner.go:130] >     },
	I1212 20:28:54.883788  398903 command_runner.go:130] >     {
	I1212 20:28:54.883795  398903 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 20:28:54.883798  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883804  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 20:28:54.883807  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883811  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883819  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 20:28:54.883827  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 20:28:54.883830  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883834  398903 command_runner.go:130] >       "size":  "84949999",
	I1212 20:28:54.883838  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883842  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883845  398903 command_runner.go:130] >       },
	I1212 20:28:54.883854  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883858  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883861  398903 command_runner.go:130] >     },
	I1212 20:28:54.883864  398903 command_runner.go:130] >     {
	I1212 20:28:54.883874  398903 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 20:28:54.883878  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883884  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 20:28:54.883888  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883891  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883899  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 20:28:54.883908  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 20:28:54.883911  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883915  398903 command_runner.go:130] >       "size":  "72170325",
	I1212 20:28:54.883919  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883923  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883926  398903 command_runner.go:130] >       },
	I1212 20:28:54.883930  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883935  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883938  398903 command_runner.go:130] >     },
	I1212 20:28:54.883942  398903 command_runner.go:130] >     {
	I1212 20:28:54.883949  398903 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 20:28:54.883952  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883958  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 20:28:54.883961  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883965  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883973  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 20:28:54.883981  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 20:28:54.883983  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883988  398903 command_runner.go:130] >       "size":  "74106775",
	I1212 20:28:54.883991  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883995  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883999  398903 command_runner.go:130] >     },
	I1212 20:28:54.884002  398903 command_runner.go:130] >     {
	I1212 20:28:54.884008  398903 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 20:28:54.884012  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.884017  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 20:28:54.884020  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884030  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.884038  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 20:28:54.884055  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 20:28:54.884061  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884064  398903 command_runner.go:130] >       "size":  "49822549",
	I1212 20:28:54.884068  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.884072  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.884075  398903 command_runner.go:130] >       },
	I1212 20:28:54.884079  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.884082  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.884085  398903 command_runner.go:130] >     },
	I1212 20:28:54.884088  398903 command_runner.go:130] >     {
	I1212 20:28:54.884095  398903 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 20:28:54.884099  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.884103  398903 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.884106  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884110  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.884118  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 20:28:54.884125  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 20:28:54.884129  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884133  398903 command_runner.go:130] >       "size":  "519884",
	I1212 20:28:54.884137  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.884141  398903 command_runner.go:130] >         "value":  "65535"
	I1212 20:28:54.884145  398903 command_runner.go:130] >       },
	I1212 20:28:54.884149  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.884152  398903 command_runner.go:130] >       "pinned":  true
	I1212 20:28:54.884155  398903 command_runner.go:130] >     }
	I1212 20:28:54.884158  398903 command_runner.go:130] >   ]
	I1212 20:28:54.884161  398903 command_runner.go:130] > }
	I1212 20:28:54.885632  398903 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:28:54.885655  398903 cache_images.go:86] Images are preloaded, skipping loading
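	The "images are preloaded" decision above is reached by comparing the repoTags returned by "sudo crictl images --output json" against the image set expected for the selected Kubernetes version. The following is only an illustrative sketch of that check, not minikube's implementation: the struct fields mirror the JSON keys visible in the listing above, while the expected-image list and all identifiers are assumptions for demonstration.

	// sketch: decode `crictl images --output json` and verify the expected tags are present
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			panic(err)
		}
		// hypothetical expected set, taken from the tags shown in the listing above
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
			"registry.k8s.io/kube-controller-manager:v1.35.0-beta.0",
			"registry.k8s.io/kube-scheduler:v1.35.0-beta.0",
			"registry.k8s.io/kube-proxy:v1.35.0-beta.0",
			"registry.k8s.io/coredns/coredns:v1.13.1",
			"registry.k8s.io/etcd:3.6.5-0",
			"registry.k8s.io/pause:3.10.1",
		}
		have := map[string]bool{}
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, want := range expected {
			if !have[want] {
				fmt.Println("missing:", want)
				return
			}
		}
		fmt.Println("all images are preloaded")
	}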
	I1212 20:28:54.885663  398903 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 20:28:54.885778  398903 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-261311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
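	The kubelet [Unit]/[Service] block logged above is generated from the node values in the cluster config (Kubernetes version, node name, node IP). Below is a minimal sketch, assuming a simple text/template over those three values; the template text and struct names are illustrative and not taken from minikube's source.

	// sketch: render a kubelet systemd drop-in like the one logged above
	package main

	import (
		"os"
		"text/template"
	)

	const kubeletUnit = `[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

	[Install]
	`

	func main() {
		params := struct {
			KubernetesVersion, NodeName, NodeIP string
		}{"v1.35.0-beta.0", "functional-261311", "192.168.49.2"}

		tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
		if err := tmpl.Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}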
	I1212 20:28:54.885868  398903 ssh_runner.go:195] Run: crio config
	I1212 20:28:54.934221  398903 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 20:28:54.934247  398903 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 20:28:54.934255  398903 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 20:28:54.934259  398903 command_runner.go:130] > #
	I1212 20:28:54.934288  398903 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 20:28:54.934303  398903 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 20:28:54.934310  398903 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 20:28:54.934320  398903 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 20:28:54.934324  398903 command_runner.go:130] > # reload'.
	I1212 20:28:54.934331  398903 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 20:28:54.934341  398903 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 20:28:54.934347  398903 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 20:28:54.934369  398903 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 20:28:54.934379  398903 command_runner.go:130] > [crio]
	I1212 20:28:54.934386  398903 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 20:28:54.934403  398903 command_runner.go:130] > # containers images, in this directory.
	I1212 20:28:54.934708  398903 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 20:28:54.934725  398903 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 20:28:54.935118  398903 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1212 20:28:54.935167  398903 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1212 20:28:54.935270  398903 command_runner.go:130] > # imagestore = ""
	I1212 20:28:54.935280  398903 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 20:28:54.935288  398903 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 20:28:54.935534  398903 command_runner.go:130] > # storage_driver = "overlay"
	I1212 20:28:54.935547  398903 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 20:28:54.935554  398903 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 20:28:54.935682  398903 command_runner.go:130] > # storage_option = [
	I1212 20:28:54.935790  398903 command_runner.go:130] > # ]
	I1212 20:28:54.935801  398903 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 20:28:54.935808  398903 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 20:28:54.935977  398903 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 20:28:54.935987  398903 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 20:28:54.936004  398903 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 20:28:54.936009  398903 command_runner.go:130] > # always happen on a node reboot
	I1212 20:28:54.936228  398903 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 20:28:54.936250  398903 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 20:28:54.936257  398903 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 20:28:54.936263  398903 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 20:28:54.936389  398903 command_runner.go:130] > # version_file_persist = ""
	I1212 20:28:54.936402  398903 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 20:28:54.936411  398903 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 20:28:54.937698  398903 command_runner.go:130] > # internal_wipe = true
	I1212 20:28:54.937721  398903 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 20:28:54.937728  398903 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 20:28:54.937860  398903 command_runner.go:130] > # internal_repair = true
	I1212 20:28:54.937871  398903 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 20:28:54.937878  398903 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 20:28:54.937885  398903 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 20:28:54.938097  398903 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 20:28:54.938132  398903 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 20:28:54.938152  398903 command_runner.go:130] > [crio.api]
	I1212 20:28:54.938172  398903 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 20:28:54.938284  398903 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 20:28:54.938314  398903 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 20:28:54.938521  398903 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 20:28:54.938555  398903 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 20:28:54.938577  398903 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 20:28:54.938680  398903 command_runner.go:130] > # stream_port = "0"
	I1212 20:28:54.938717  398903 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 20:28:54.938951  398903 command_runner.go:130] > # stream_enable_tls = false
	I1212 20:28:54.938995  398903 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 20:28:54.939084  398903 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 20:28:54.939113  398903 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 20:28:54.939142  398903 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1212 20:28:54.939249  398903 command_runner.go:130] > # stream_tls_cert = ""
	I1212 20:28:54.939291  398903 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 20:28:54.939312  398903 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1212 20:28:54.939622  398903 command_runner.go:130] > # stream_tls_key = ""
	I1212 20:28:54.939657  398903 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 20:28:54.939704  398903 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 20:28:54.939736  398903 command_runner.go:130] > # automatically pick up the changes.
	I1212 20:28:54.939811  398903 command_runner.go:130] > # stream_tls_ca = ""
	I1212 20:28:54.939858  398903 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 20:28:54.940308  398903 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 20:28:54.940353  398903 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 20:28:54.940776  398903 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1212 20:28:54.940788  398903 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 20:28:54.940801  398903 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 20:28:54.940806  398903 command_runner.go:130] > [crio.runtime]
	I1212 20:28:54.940824  398903 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 20:28:54.940830  398903 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 20:28:54.940834  398903 command_runner.go:130] > # "nofile=1024:2048"
	I1212 20:28:54.940840  398903 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 20:28:54.940969  398903 command_runner.go:130] > # default_ulimits = [
	I1212 20:28:54.941191  398903 command_runner.go:130] > # ]
	I1212 20:28:54.941204  398903 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 20:28:54.941558  398903 command_runner.go:130] > # no_pivot = false
	I1212 20:28:54.941568  398903 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 20:28:54.941575  398903 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 20:28:54.941945  398903 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 20:28:54.941956  398903 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 20:28:54.941961  398903 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 20:28:54.942013  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:28:54.942279  398903 command_runner.go:130] > # conmon = ""
	I1212 20:28:54.942287  398903 command_runner.go:130] > # Cgroup setting for conmon
	I1212 20:28:54.942295  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 20:28:54.942500  398903 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 20:28:54.942511  398903 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 20:28:54.942545  398903 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 20:28:54.942582  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:28:54.942706  398903 command_runner.go:130] > # conmon_env = [
	I1212 20:28:54.942961  398903 command_runner.go:130] > # ]
	I1212 20:28:54.943022  398903 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 20:28:54.943043  398903 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 20:28:54.943084  398903 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 20:28:54.943203  398903 command_runner.go:130] > # default_env = [
	I1212 20:28:54.943456  398903 command_runner.go:130] > # ]
	I1212 20:28:54.943514  398903 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 20:28:54.943537  398903 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1212 20:28:54.943931  398903 command_runner.go:130] > # selinux = false
	I1212 20:28:54.943943  398903 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 20:28:54.943997  398903 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1212 20:28:54.944007  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944219  398903 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:28:54.944231  398903 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1212 20:28:54.944237  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944517  398903 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1212 20:28:54.944529  398903 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 20:28:54.944536  398903 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 20:28:54.944595  398903 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 20:28:54.944603  398903 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 20:28:54.944609  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944908  398903 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 20:28:54.944919  398903 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 20:28:54.944924  398903 command_runner.go:130] > # the cgroup blockio controller.
	I1212 20:28:54.945253  398903 command_runner.go:130] > # blockio_config_file = ""
	I1212 20:28:54.945265  398903 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 20:28:54.945309  398903 command_runner.go:130] > # blockio parameters.
	I1212 20:28:54.945663  398903 command_runner.go:130] > # blockio_reload = false
	I1212 20:28:54.945676  398903 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 20:28:54.945725  398903 command_runner.go:130] > # irqbalance daemon.
	I1212 20:28:54.946100  398903 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 20:28:54.946111  398903 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1212 20:28:54.946174  398903 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 20:28:54.946186  398903 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 20:28:54.946547  398903 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 20:28:54.946561  398903 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 20:28:54.946567  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.946867  398903 command_runner.go:130] > # rdt_config_file = ""
	I1212 20:28:54.946878  398903 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 20:28:54.947089  398903 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 20:28:54.947100  398903 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 20:28:54.947442  398903 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 20:28:54.947454  398903 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 20:28:54.947513  398903 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 20:28:54.947527  398903 command_runner.go:130] > # will be added.
	I1212 20:28:54.947601  398903 command_runner.go:130] > # default_capabilities = [
	I1212 20:28:54.947867  398903 command_runner.go:130] > # 	"CHOWN",
	I1212 20:28:54.948094  398903 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 20:28:54.948277  398903 command_runner.go:130] > # 	"FSETID",
	I1212 20:28:54.948500  398903 command_runner.go:130] > # 	"FOWNER",
	I1212 20:28:54.948701  398903 command_runner.go:130] > # 	"SETGID",
	I1212 20:28:54.948883  398903 command_runner.go:130] > # 	"SETUID",
	I1212 20:28:54.949109  398903 command_runner.go:130] > # 	"SETPCAP",
	I1212 20:28:54.949307  398903 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 20:28:54.949502  398903 command_runner.go:130] > # 	"KILL",
	I1212 20:28:54.949671  398903 command_runner.go:130] > # ]
	I1212 20:28:54.949741  398903 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 20:28:54.949814  398903 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 20:28:54.950073  398903 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 20:28:54.950143  398903 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 20:28:54.950211  398903 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:28:54.950289  398903 command_runner.go:130] > default_sysctls = [
	I1212 20:28:54.950330  398903 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 20:28:54.950370  398903 command_runner.go:130] > ]
	I1212 20:28:54.950439  398903 command_runner.go:130] > # List of devices on the host that a
	I1212 20:28:54.950465  398903 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 20:28:54.950518  398903 command_runner.go:130] > # allowed_devices = [
	I1212 20:28:54.950672  398903 command_runner.go:130] > # 	"/dev/fuse",
	I1212 20:28:54.950902  398903 command_runner.go:130] > # 	"/dev/net/tun",
	I1212 20:28:54.951150  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951221  398903 command_runner.go:130] > # List of additional devices. specified as
	I1212 20:28:54.951244  398903 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 20:28:54.951280  398903 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 20:28:54.951306  398903 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:28:54.951324  398903 command_runner.go:130] > # additional_devices = [
	I1212 20:28:54.951343  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951424  398903 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 20:28:54.951503  398903 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 20:28:54.951521  398903 command_runner.go:130] > # 	"/etc/cdi",
	I1212 20:28:54.951592  398903 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 20:28:54.951609  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951651  398903 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 20:28:54.951672  398903 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 20:28:54.951689  398903 command_runner.go:130] > # Defaults to false.
	I1212 20:28:54.951751  398903 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 20:28:54.951809  398903 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 20:28:54.951879  398903 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 20:28:54.951906  398903 command_runner.go:130] > # hooks_dir = [
	I1212 20:28:54.951934  398903 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 20:28:54.951952  398903 command_runner.go:130] > # ]
	I1212 20:28:54.952010  398903 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 20:28:54.952049  398903 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 20:28:54.952097  398903 command_runner.go:130] > # its default mounts from the following two files:
	I1212 20:28:54.952138  398903 command_runner.go:130] > #
	I1212 20:28:54.952160  398903 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 20:28:54.952191  398903 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 20:28:54.952262  398903 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 20:28:54.952281  398903 command_runner.go:130] > #
	I1212 20:28:54.952324  398903 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 20:28:54.952346  398903 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 20:28:54.952404  398903 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 20:28:54.952491  398903 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 20:28:54.952529  398903 command_runner.go:130] > #
	I1212 20:28:54.952568  398903 command_runner.go:130] > # default_mounts_file = ""
	I1212 20:28:54.952602  398903 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 20:28:54.952623  398903 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 20:28:54.952643  398903 command_runner.go:130] > # pids_limit = -1
	I1212 20:28:54.952677  398903 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 20:28:54.952708  398903 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 20:28:54.952837  398903 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 20:28:54.952892  398903 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 20:28:54.952911  398903 command_runner.go:130] > # log_size_max = -1
	I1212 20:28:54.952955  398903 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 20:28:54.953009  398903 command_runner.go:130] > # log_to_journald = false
	I1212 20:28:54.953062  398903 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 20:28:54.953088  398903 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 20:28:54.953123  398903 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 20:28:54.953149  398903 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 20:28:54.953170  398903 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 20:28:54.953206  398903 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 20:28:54.953299  398903 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 20:28:54.953339  398903 command_runner.go:130] > # read_only = false
	I1212 20:28:54.953359  398903 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 20:28:54.953395  398903 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 20:28:54.953418  398903 command_runner.go:130] > # live configuration reload.
	I1212 20:28:54.953436  398903 command_runner.go:130] > # log_level = "info"
	I1212 20:28:54.953472  398903 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 20:28:54.953562  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.953601  398903 command_runner.go:130] > # log_filter = ""
	I1212 20:28:54.953622  398903 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 20:28:54.953643  398903 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 20:28:54.953675  398903 command_runner.go:130] > # separated by comma.
	I1212 20:28:54.953712  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.953763  398903 command_runner.go:130] > # uid_mappings = ""
	I1212 20:28:54.953804  398903 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 20:28:54.953825  398903 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 20:28:54.953843  398903 command_runner.go:130] > # separated by comma.
	I1212 20:28:54.953907  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.953931  398903 command_runner.go:130] > # gid_mappings = ""
	I1212 20:28:54.953969  398903 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 20:28:54.954021  398903 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:28:54.954062  398903 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:28:54.954085  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.954103  398903 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 20:28:54.954162  398903 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 20:28:54.954184  398903 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:28:54.954234  398903 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:28:54.954322  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.954363  398903 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 20:28:54.954382  398903 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 20:28:54.954423  398903 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 20:28:54.954443  398903 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 20:28:54.954461  398903 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 20:28:54.954533  398903 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 20:28:54.954586  398903 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 20:28:54.954623  398903 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 20:28:54.954643  398903 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 20:28:54.954683  398903 command_runner.go:130] > # drop_infra_ctr = true
	I1212 20:28:54.954704  398903 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 20:28:54.954737  398903 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 20:28:54.954797  398903 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 20:28:54.954876  398903 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 20:28:54.954917  398903 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 20:28:54.954947  398903 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 20:28:54.954967  398903 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 20:28:54.955001  398903 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 20:28:54.955088  398903 command_runner.go:130] > # shared_cpuset = ""
	I1212 20:28:54.955124  398903 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 20:28:54.955160  398903 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 20:28:54.955179  398903 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 20:28:54.955201  398903 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 20:28:54.955242  398903 command_runner.go:130] > # pinns_path = ""
	I1212 20:28:54.955301  398903 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 20:28:54.955365  398903 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 20:28:54.955383  398903 command_runner.go:130] > # enable_criu_support = true
	I1212 20:28:54.955425  398903 command_runner.go:130] > # Enable/disable the generation of the container,
	I1212 20:28:54.955447  398903 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 20:28:54.955466  398903 command_runner.go:130] > # enable_pod_events = false
	I1212 20:28:54.955506  398903 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 20:28:54.955594  398903 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 20:28:54.955624  398903 command_runner.go:130] > # default_runtime = "crun"
	I1212 20:28:54.955661  398903 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 20:28:54.955697  398903 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 20:28:54.955721  398903 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 20:28:54.955790  398903 command_runner.go:130] > # creation as a file is not desired either.
	I1212 20:28:54.955868  398903 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 20:28:54.955891  398903 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 20:28:54.955927  398903 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 20:28:54.955946  398903 command_runner.go:130] > # ]
	I1212 20:28:54.955966  398903 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 20:28:54.956007  398903 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 20:28:54.956057  398903 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 20:28:54.956117  398903 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 20:28:54.956136  398903 command_runner.go:130] > #
	I1212 20:28:54.956299  398903 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 20:28:54.956391  398903 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 20:28:54.956423  398903 command_runner.go:130] > # runtime_type = "oci"
	I1212 20:28:54.956443  398903 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 20:28:54.956476  398903 command_runner.go:130] > # inherit_default_runtime = false
	I1212 20:28:54.956515  398903 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 20:28:54.956535  398903 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 20:28:54.956555  398903 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 20:28:54.956602  398903 command_runner.go:130] > # monitor_env = []
	I1212 20:28:54.956632  398903 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 20:28:54.956651  398903 command_runner.go:130] > # allowed_annotations = []
	I1212 20:28:54.956673  398903 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 20:28:54.956703  398903 command_runner.go:130] > # no_sync_log = false
	I1212 20:28:54.956730  398903 command_runner.go:130] > # default_annotations = {}
	I1212 20:28:54.956749  398903 command_runner.go:130] > # stream_websockets = false
	I1212 20:28:54.956770  398903 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:28:54.956828  398903 command_runner.go:130] > # Where:
	I1212 20:28:54.956858  398903 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 20:28:54.956879  398903 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 20:28:54.956902  398903 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 20:28:54.956934  398903 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 20:28:54.956956  398903 command_runner.go:130] > #   in $PATH.
	I1212 20:28:54.956979  398903 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 20:28:54.957012  398903 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 20:28:54.957045  398903 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 20:28:54.957066  398903 command_runner.go:130] > #   state.
	I1212 20:28:54.957088  398903 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 20:28:54.957122  398903 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 20:28:54.957146  398903 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1212 20:28:54.957169  398903 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1212 20:28:54.957202  398903 command_runner.go:130] > #   the values from the default runtime on load time.
	I1212 20:28:54.957227  398903 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 20:28:54.957250  398903 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 20:28:54.957281  398903 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 20:28:54.957305  398903 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 20:28:54.957327  398903 command_runner.go:130] > #   The currently recognized values are:
	I1212 20:28:54.957359  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 20:28:54.957385  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 20:28:54.957408  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 20:28:54.957450  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 20:28:54.957471  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 20:28:54.957498  398903 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 20:28:54.957534  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 20:28:54.957557  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 20:28:54.957580  398903 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 20:28:54.957613  398903 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1212 20:28:54.957636  398903 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1212 20:28:54.957657  398903 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1212 20:28:54.957689  398903 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1212 20:28:54.957712  398903 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1212 20:28:54.957733  398903 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1212 20:28:54.957769  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1212 20:28:54.957795  398903 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 20:28:54.957816  398903 command_runner.go:130] > #   deprecated option "conmon".
	I1212 20:28:54.957848  398903 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 20:28:54.957870  398903 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 20:28:54.957893  398903 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 20:28:54.957923  398903 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 20:28:54.957949  398903 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 20:28:54.957971  398903 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 20:28:54.958007  398903 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1212 20:28:54.958030  398903 command_runner.go:130] > #   conmon-rs by using:
	I1212 20:28:54.958053  398903 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1212 20:28:54.958092  398903 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1212 20:28:54.958133  398903 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1212 20:28:54.958204  398903 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 20:28:54.958225  398903 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 20:28:54.958278  398903 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1212 20:28:54.958303  398903 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1212 20:28:54.958340  398903 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1212 20:28:54.958372  398903 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1212 20:28:54.958415  398903 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1212 20:28:54.958449  398903 command_runner.go:130] > #   when a machine crash happens.
	I1212 20:28:54.958472  398903 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1212 20:28:54.958496  398903 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1212 20:28:54.958530  398903 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1212 20:28:54.958560  398903 command_runner.go:130] > #   seccomp profile for the runtime.
	I1212 20:28:54.958583  398903 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1212 20:28:54.958606  398903 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1212 20:28:54.958635  398903 command_runner.go:130] > #
	I1212 20:28:54.958656  398903 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 20:28:54.958676  398903 command_runner.go:130] > #
	I1212 20:28:54.958708  398903 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 20:28:54.958738  398903 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 20:28:54.958756  398903 command_runner.go:130] > #
	I1212 20:28:54.958778  398903 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 20:28:54.958809  398903 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 20:28:54.958834  398903 command_runner.go:130] > #
	I1212 20:28:54.958854  398903 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 20:28:54.958874  398903 command_runner.go:130] > # feature.
	I1212 20:28:54.958903  398903 command_runner.go:130] > #
	I1212 20:28:54.958934  398903 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1212 20:28:54.958955  398903 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 20:28:54.958978  398903 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 20:28:54.959015  398903 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 20:28:54.959041  398903 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 20:28:54.959060  398903 command_runner.go:130] > #
	I1212 20:28:54.959092  398903 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 20:28:54.959116  398903 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 20:28:54.959135  398903 command_runner.go:130] > #
	I1212 20:28:54.959166  398903 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1212 20:28:54.959195  398903 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 20:28:54.959213  398903 command_runner.go:130] > #
	I1212 20:28:54.959234  398903 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 20:28:54.959264  398903 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 20:28:54.959290  398903 command_runner.go:130] > # limitation.
	I1212 20:28:54.959309  398903 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1212 20:28:54.959329  398903 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1212 20:28:54.959363  398903 command_runner.go:130] > runtime_type = ""
	I1212 20:28:54.959390  398903 command_runner.go:130] > runtime_root = "/run/crun"
	I1212 20:28:54.959409  398903 command_runner.go:130] > inherit_default_runtime = false
	I1212 20:28:54.959429  398903 command_runner.go:130] > runtime_config_path = ""
	I1212 20:28:54.959460  398903 command_runner.go:130] > container_min_memory = ""
	I1212 20:28:54.959486  398903 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 20:28:54.959503  398903 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 20:28:54.959521  398903 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:28:54.959541  398903 command_runner.go:130] > allowed_annotations = [
	I1212 20:28:54.959574  398903 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1212 20:28:54.959593  398903 command_runner.go:130] > ]
	I1212 20:28:54.959612  398903 command_runner.go:130] > privileged_without_host_devices = false
	I1212 20:28:54.959644  398903 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 20:28:54.959671  398903 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1212 20:28:54.959688  398903 command_runner.go:130] > runtime_type = ""
	I1212 20:28:54.959705  398903 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 20:28:54.959727  398903 command_runner.go:130] > inherit_default_runtime = false
	I1212 20:28:54.959762  398903 command_runner.go:130] > runtime_config_path = ""
	I1212 20:28:54.959780  398903 command_runner.go:130] > container_min_memory = ""
	I1212 20:28:54.959800  398903 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 20:28:54.959819  398903 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 20:28:54.959855  398903 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:28:54.959872  398903 command_runner.go:130] > privileged_without_host_devices = false
	I1212 20:28:54.959894  398903 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 20:28:54.959924  398903 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 20:28:54.959953  398903 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 20:28:54.959976  398903 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 20:28:54.960002  398903 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1212 20:28:54.960047  398903 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1212 20:28:54.960072  398903 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1212 20:28:54.960106  398903 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 20:28:54.960135  398903 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 20:28:54.960156  398903 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 20:28:54.960176  398903 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 20:28:54.960207  398903 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 20:28:54.960236  398903 command_runner.go:130] > # Example:
	I1212 20:28:54.960257  398903 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 20:28:54.960281  398903 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 20:28:54.960315  398903 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 20:28:54.960337  398903 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 20:28:54.960356  398903 command_runner.go:130] > # cpuset = "0-1"
	I1212 20:28:54.960392  398903 command_runner.go:130] > # cpushares = "5"
	I1212 20:28:54.960413  398903 command_runner.go:130] > # cpuquota = "1000"
	I1212 20:28:54.960435  398903 command_runner.go:130] > # cpuperiod = "100000"
	I1212 20:28:54.960473  398903 command_runner.go:130] > # cpulimit = "35"
	I1212 20:28:54.960495  398903 command_runner.go:130] > # Where:
	I1212 20:28:54.960507  398903 command_runner.go:130] > # The workload name is workload-type.
	I1212 20:28:54.960516  398903 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 20:28:54.960522  398903 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 20:28:54.960542  398903 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 20:28:54.960555  398903 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 20:28:54.960563  398903 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
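The comments above say that "cpulimit" is given in millicores and is combined with "cpuperiod" (in microseconds) to derive "cpuquota". A minimal Go sketch of that arithmetic using the example values from the config; the helper name is made up and this is not CRI-O code:

    package main

    import "fmt"

    // quotaFromLimit converts a CPU limit in millicores into a CFS quota in
    // microseconds for the given period, mirroring the relationship described
    // in the workloads comments above (hypothetical helper, not CRI-O code).
    func quotaFromLimit(cpulimitMillicores, cpuperiodUsec int64) int64 {
        return cpulimitMillicores * cpuperiodUsec / 1000
    }

    func main() {
        // With the example values cpulimit = "35" and cpuperiod = "100000",
        // the derived quota is 3500 microseconds per 100000-microsecond period.
        fmt.Println(quotaFromLimit(35, 100000))
    }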
	I1212 20:28:54.960568  398903 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 20:28:54.960575  398903 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 20:28:54.960579  398903 command_runner.go:130] > # Default value is set to true
	I1212 20:28:54.960595  398903 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 20:28:54.960602  398903 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 20:28:54.960613  398903 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 20:28:54.960618  398903 command_runner.go:130] > # Default value is set to 'false'
	I1212 20:28:54.960623  398903 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 20:28:54.960637  398903 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1212 20:28:54.960645  398903 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1212 20:28:54.960649  398903 command_runner.go:130] > # timezone = ""
	I1212 20:28:54.960656  398903 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 20:28:54.960661  398903 command_runner.go:130] > #
	I1212 20:28:54.960668  398903 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 20:28:54.960675  398903 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1212 20:28:54.960682  398903 command_runner.go:130] > [crio.image]
	I1212 20:28:54.960688  398903 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 20:28:54.960693  398903 command_runner.go:130] > # default_transport = "docker://"
	I1212 20:28:54.960702  398903 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 20:28:54.960714  398903 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:28:54.960719  398903 command_runner.go:130] > # global_auth_file = ""
	I1212 20:28:54.960724  398903 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 20:28:54.960730  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.960738  398903 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.960745  398903 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 20:28:54.960758  398903 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:28:54.960764  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.960770  398903 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 20:28:54.960777  398903 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 20:28:54.960783  398903 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 20:28:54.960793  398903 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 20:28:54.960800  398903 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 20:28:54.960804  398903 command_runner.go:130] > # pause_command = "/pause"
	I1212 20:28:54.960810  398903 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 20:28:54.960819  398903 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 20:28:54.960828  398903 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 20:28:54.960837  398903 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 20:28:54.960843  398903 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 20:28:54.960855  398903 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 20:28:54.960859  398903 command_runner.go:130] > # pinned_images = [
	I1212 20:28:54.960863  398903 command_runner.go:130] > # ]
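The pinned_images comments above describe three match styles: exact, glob with a trailing *, and keyword with * on both ends. A rough Go sketch of those semantics for a single pattern, illustrative only and not CRI-O's implementation:

    package main

    import (
        "fmt"
        "strings"
    )

    // matchesPinned reports whether image matches one pinned_images pattern
    // using the three styles described above: exact match, glob with a
    // trailing "*", or keyword with "*" on both ends. Sketch only.
    func matchesPinned(pattern, image string) bool {
        switch {
        case strings.HasPrefix(pattern, "*") && strings.HasSuffix(pattern, "*"):
            return strings.Contains(image, strings.Trim(pattern, "*"))
        case strings.HasSuffix(pattern, "*"):
            return strings.HasPrefix(image, strings.TrimSuffix(pattern, "*"))
        default:
            return pattern == image
        }
    }

    func main() {
        img := "registry.k8s.io/pause:3.10.1"
        fmt.Println(matchesPinned("registry.k8s.io/pause:3.10.1", img)) // exact
        fmt.Println(matchesPinned("registry.k8s.io/*", img))            // glob
        fmt.Println(matchesPinned("*pause*", img))                      // keyword
    }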
	I1212 20:28:54.960869  398903 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 20:28:54.960879  398903 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 20:28:54.960885  398903 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 20:28:54.960891  398903 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 20:28:54.960902  398903 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 20:28:54.960910  398903 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1212 20:28:54.960916  398903 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 20:28:54.960923  398903 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 20:28:54.960933  398903 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 20:28:54.960939  398903 command_runner.go:130] > # or the concatenated path is non-existent, then the signature_policy or system
	I1212 20:28:54.960948  398903 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 20:28:54.960953  398903 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1212 20:28:54.960960  398903 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 20:28:54.960969  398903 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 20:28:54.960973  398903 command_runner.go:130] > # changing them here.
	I1212 20:28:54.960979  398903 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1212 20:28:54.960983  398903 command_runner.go:130] > # insecure_registries = [
	I1212 20:28:54.960986  398903 command_runner.go:130] > # ]
	I1212 20:28:54.960995  398903 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 20:28:54.961006  398903 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 20:28:54.961012  398903 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 20:28:54.961020  398903 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 20:28:54.961026  398903 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 20:28:54.961032  398903 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1212 20:28:54.961042  398903 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1212 20:28:54.961046  398903 command_runner.go:130] > # auto_reload_registries = false
	I1212 20:28:54.961054  398903 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1212 20:28:54.961062  398903 command_runner.go:130] > # gets canceled. This value will be also used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1212 20:28:54.961069  398903 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1212 20:28:54.961077  398903 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1212 20:28:54.961082  398903 command_runner.go:130] > # The mode of short name resolution.
	I1212 20:28:54.961089  398903 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1212 20:28:54.961100  398903 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1212 20:28:54.961105  398903 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1212 20:28:54.961112  398903 command_runner.go:130] > # short_name_mode = "enforcing"
	I1212 20:28:54.961118  398903 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1212 20:28:54.961124  398903 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1212 20:28:54.961132  398903 command_runner.go:130] > # oci_artifact_mount_support = true
	I1212 20:28:54.961138  398903 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 20:28:54.961142  398903 command_runner.go:130] > # CNI plugins.
	I1212 20:28:54.961146  398903 command_runner.go:130] > [crio.network]
	I1212 20:28:54.961152  398903 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 20:28:54.961159  398903 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 20:28:54.961164  398903 command_runner.go:130] > # cni_default_network = ""
	I1212 20:28:54.961171  398903 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 20:28:54.961179  398903 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 20:28:54.961185  398903 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 20:28:54.961189  398903 command_runner.go:130] > # plugin_dirs = [
	I1212 20:28:54.961195  398903 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 20:28:54.961198  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961209  398903 command_runner.go:130] > # List of included pod metrics.
	I1212 20:28:54.961213  398903 command_runner.go:130] > # included_pod_metrics = [
	I1212 20:28:54.961217  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961224  398903 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 20:28:54.961228  398903 command_runner.go:130] > [crio.metrics]
	I1212 20:28:54.961234  398903 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 20:28:54.961243  398903 command_runner.go:130] > # enable_metrics = false
	I1212 20:28:54.961248  398903 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 20:28:54.961253  398903 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 20:28:54.961262  398903 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1212 20:28:54.961271  398903 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 20:28:54.961280  398903 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 20:28:54.961285  398903 command_runner.go:130] > # metrics_collectors = [
	I1212 20:28:54.961291  398903 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 20:28:54.961296  398903 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 20:28:54.961302  398903 command_runner.go:130] > # 	"containers_oom_total",
	I1212 20:28:54.961306  398903 command_runner.go:130] > # 	"processes_defunct",
	I1212 20:28:54.961311  398903 command_runner.go:130] > # 	"operations_total",
	I1212 20:28:54.961315  398903 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 20:28:54.961320  398903 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 20:28:54.961324  398903 command_runner.go:130] > # 	"operations_errors_total",
	I1212 20:28:54.961328  398903 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 20:28:54.961333  398903 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 20:28:54.961338  398903 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 20:28:54.961342  398903 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 20:28:54.961346  398903 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 20:28:54.961351  398903 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 20:28:54.961358  398903 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 20:28:54.961363  398903 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 20:28:54.961374  398903 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1212 20:28:54.961377  398903 command_runner.go:130] > # ]
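Per the comments above, a collector may be named with or without the "crio_" and "container_runtime_" prefixes. A small Go sketch of the implied normalization; the helper is hypothetical, not CRI-O code:

    package main

    import (
        "fmt"
        "strings"
    )

    // normalizeCollector strips the optional prefixes described above so that
    // "operations", "crio_operations" and "container_runtime_crio_operations"
    // all resolve to the same collector name. Illustrative only.
    func normalizeCollector(name string) string {
        name = strings.TrimPrefix(name, "container_runtime_")
        name = strings.TrimPrefix(name, "crio_")
        return name
    }

    func main() {
        for _, n := range []string{"operations", "crio_operations", "container_runtime_crio_operations"} {
            fmt.Println(normalizeCollector(n))
        }
    }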
	I1212 20:28:54.961383  398903 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1212 20:28:54.961389  398903 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1212 20:28:54.961394  398903 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 20:28:54.961398  398903 command_runner.go:130] > # metrics_port = 9090
	I1212 20:28:54.961404  398903 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 20:28:54.961409  398903 command_runner.go:130] > # metrics_socket = ""
	I1212 20:28:54.961420  398903 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 20:28:54.961429  398903 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 20:28:54.961440  398903 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 20:28:54.961445  398903 command_runner.go:130] > # certificate on any modification event.
	I1212 20:28:54.961452  398903 command_runner.go:130] > # metrics_cert = ""
	I1212 20:28:54.961458  398903 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 20:28:54.961464  398903 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 20:28:54.961470  398903 command_runner.go:130] > # metrics_key = ""
	I1212 20:28:54.961476  398903 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 20:28:54.961480  398903 command_runner.go:130] > [crio.tracing]
	I1212 20:28:54.961487  398903 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 20:28:54.961491  398903 command_runner.go:130] > # enable_tracing = false
	I1212 20:28:54.961499  398903 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 20:28:54.961504  398903 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1212 20:28:54.961513  398903 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 20:28:54.961520  398903 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
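The sampling rate above is expressed per million spans, so 1000000 samples everything and 0 samples nothing. A tiny Go sketch of the corresponding decision, illustrative only:

    package main

    import (
        "fmt"
        "math/rand"
    )

    // shouldSample returns true for roughly ratePerMillion out of every
    // 1,000,000 spans; 1000000 samples everything, 0 samples nothing.
    func shouldSample(ratePerMillion int) bool {
        return rand.Intn(1_000_000) < ratePerMillion
    }

    func main() {
        fmt.Println(shouldSample(1_000_000)) // always true
        fmt.Println(shouldSample(0))         // always false
    }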
	I1212 20:28:54.961527  398903 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 20:28:54.961530  398903 command_runner.go:130] > [crio.nri]
	I1212 20:28:54.961534  398903 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 20:28:54.961544  398903 command_runner.go:130] > # enable_nri = true
	I1212 20:28:54.961548  398903 command_runner.go:130] > # NRI socket to listen on.
	I1212 20:28:54.961553  398903 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 20:28:54.961559  398903 command_runner.go:130] > # NRI plugin directory to use.
	I1212 20:28:54.961564  398903 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 20:28:54.961569  398903 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 20:28:54.961574  398903 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 20:28:54.961579  398903 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 20:28:54.961660  398903 command_runner.go:130] > # nri_disable_connections = false
	I1212 20:28:54.961672  398903 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 20:28:54.961678  398903 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 20:28:54.961683  398903 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 20:28:54.961689  398903 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 20:28:54.961696  398903 command_runner.go:130] > # NRI default validator configuration.
	I1212 20:28:54.961703  398903 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1212 20:28:54.961717  398903 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1212 20:28:54.961722  398903 command_runner.go:130] > # can be restricted/rejected:
	I1212 20:28:54.961728  398903 command_runner.go:130] > # - OCI hook injection
	I1212 20:28:54.961734  398903 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1212 20:28:54.961740  398903 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1212 20:28:54.961747  398903 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1212 20:28:54.961752  398903 command_runner.go:130] > # - adjustment of linux namespaces
	I1212 20:28:54.961759  398903 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1212 20:28:54.961766  398903 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1212 20:28:54.961775  398903 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1212 20:28:54.961779  398903 command_runner.go:130] > #
	I1212 20:28:54.961783  398903 command_runner.go:130] > # [crio.nri.default_validator]
	I1212 20:28:54.961791  398903 command_runner.go:130] > # nri_enable_default_validator = false
	I1212 20:28:54.961796  398903 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1212 20:28:54.961802  398903 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1212 20:28:54.961810  398903 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1212 20:28:54.961815  398903 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1212 20:28:54.961821  398903 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1212 20:28:54.961828  398903 command_runner.go:130] > # nri_validator_required_plugins = [
	I1212 20:28:54.961831  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961838  398903 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1212 20:28:54.961845  398903 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 20:28:54.961851  398903 command_runner.go:130] > [crio.stats]
	I1212 20:28:54.961860  398903 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 20:28:54.961866  398903 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 20:28:54.961872  398903 command_runner.go:130] > # stats_collection_period = 0
	I1212 20:28:54.961879  398903 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1212 20:28:54.961889  398903 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1212 20:28:54.961894  398903 command_runner.go:130] > # collection_period = 0
	I1212 20:28:54.961945  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912485774Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1212 20:28:54.961961  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912523214Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1212 20:28:54.961978  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912551908Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1212 20:28:54.961989  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912577237Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1212 20:28:54.962000  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912661332Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.962016  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912929282Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1212 20:28:54.962028  398903 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 20:28:54.962158  398903 cni.go:84] Creating CNI manager for ""
	I1212 20:28:54.962172  398903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:28:54.962187  398903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:28:54.962211  398903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-261311 NodeName:functional-261311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:28:54.962351  398903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-261311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
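A hedged Go sketch of sanity-checking the generated kubeadm config by decoding each "---"-separated document and printing its kind; the file path mirrors the kubeadm.yaml.new upload above, and this check is illustrative rather than something minikube itself performs:

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        // The file contains several YAML documents; decode them one by one.
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                break // io.EOF once all documents have been read
            }
            fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
        }
    }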
	
	I1212 20:28:54.962430  398903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:28:54.969281  398903 command_runner.go:130] > kubeadm
	I1212 20:28:54.969300  398903 command_runner.go:130] > kubectl
	I1212 20:28:54.969304  398903 command_runner.go:130] > kubelet
	I1212 20:28:54.970141  398903 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:28:54.970208  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:28:54.977797  398903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:28:54.990948  398903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:28:55.010887  398903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 20:28:55.035195  398903 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:28:55.039688  398903 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
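The grep above confirms that control-plane.minikube.internal already maps to the node IP in /etc/hosts. A Go sketch of the same lookup; minikube shells out to grep instead, and the helper below is hypothetical:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // hasHostsEntry reports whether path already maps hostname to ip, the
    // same question the grep against /etc/hosts answers above. Sketch only.
    func hasHostsEntry(path, ip, hostname string) (bool, error) {
        f, err := os.Open(path)
        if err != nil {
            return false, err
        }
        defer f.Close()

        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == ip && fields[1] == hostname {
                return true, nil
            }
        }
        return false, sc.Err()
    }

    func main() {
        ok, err := hasHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal")
        fmt.Println(ok, err)
    }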
	I1212 20:28:55.039770  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:55.162925  398903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:28:55.180455  398903 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311 for IP: 192.168.49.2
	I1212 20:28:55.180486  398903 certs.go:195] generating shared ca certs ...
	I1212 20:28:55.180503  398903 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.180666  398903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:28:55.180714  398903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:28:55.180726  398903 certs.go:257] generating profile certs ...
	I1212 20:28:55.180830  398903 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key
	I1212 20:28:55.180895  398903 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7
	I1212 20:28:55.180950  398903 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key
	I1212 20:28:55.180963  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:28:55.180976  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:28:55.180993  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:28:55.181015  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:28:55.181034  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:28:55.181047  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:28:55.181062  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:28:55.181077  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:28:55.181130  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:28:55.181167  398903 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:28:55.181180  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:28:55.181208  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:28:55.181238  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:28:55.181263  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:28:55.181322  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:28:55.181358  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.181374  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.181387  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.181918  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:28:55.205330  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:28:55.228282  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:28:55.247851  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:28:55.266269  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:28:55.284183  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:28:55.302120  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:28:55.319891  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:28:55.338073  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:28:55.356708  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:28:55.374821  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:28:55.392459  398903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:28:55.405239  398903 ssh_runner.go:195] Run: openssl version
	I1212 20:28:55.411334  398903 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 20:28:55.411437  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.418985  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:28:55.426485  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430183  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430452  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430510  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.471108  398903 command_runner.go:130] > b5213941
	I1212 20:28:55.471637  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:28:55.479292  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.486905  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:28:55.494608  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498479  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498582  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498669  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.541933  398903 command_runner.go:130] > 51391683
	I1212 20:28:55.542454  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:28:55.550083  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.558343  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:28:55.567964  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571832  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571862  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571932  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.617329  398903 command_runner.go:130] > 3ec20f2e
	I1212 20:28:55.617911  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
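The sequence above links each CA into /usr/share/ca-certificates, computes its OpenSSL subject hash, and expects a /etc/ssl/certs/<hash>.0 symlink. A Go sketch tying those steps together; it needs root, and minikube runs the equivalent commands over SSH with sudo:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCAByHash computes the OpenSSL subject hash of a CA certificate and
    // ensures /etc/ssl/certs/<hash>.0 points at it, which is how the
    // "openssl x509 -hash" and "test -L" steps above fit together. Sketch only.
    func linkCAByHash(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        _ = os.Remove(link) // replace any stale link, like "ln -fs"
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"))
    }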
	I1212 20:28:55.625593  398903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:28:55.629390  398903 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:28:55.629419  398903 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 20:28:55.629427  398903 command_runner.go:130] > Device: 259,1	Inode: 1315224     Links: 1
	I1212 20:28:55.629433  398903 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:28:55.629439  398903 command_runner.go:130] > Access: 2025-12-12 20:24:47.845478497 +0000
	I1212 20:28:55.629445  398903 command_runner.go:130] > Modify: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629449  398903 command_runner.go:130] > Change: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629454  398903 command_runner.go:130] >  Birth: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629525  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:28:55.669986  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.670463  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:28:55.711204  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.711650  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:28:55.751880  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.752298  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:28:55.793260  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.793349  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:28:55.836082  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.836162  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:28:55.878637  398903 command_runner.go:130] > Certificate will not expire
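Each "openssl x509 -checkend 86400" call above asks whether a certificate expires within the next 24 hours. A sketch of the equivalent check in Go with crypto/x509:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path
    // expires within d, the same question "openssl x509 -checkend 86400"
    // answers above for a 24-hour window.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM data in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }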
	I1212 20:28:55.879114  398903 kubeadm.go:401] StartCluster: {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:55.879241  398903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:28:55.879321  398903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:28:55.906646  398903 cri.go:89] found id: ""
	I1212 20:28:55.906721  398903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:28:55.913746  398903 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 20:28:55.913771  398903 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 20:28:55.913778  398903 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 20:28:55.914790  398903 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:28:55.914807  398903 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:28:55.914874  398903 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:28:55.922292  398903 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:28:55.922687  398903 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-261311" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.922785  398903 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "functional-261311" cluster setting kubeconfig missing "functional-261311" context setting]
	I1212 20:28:55.923055  398903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.923461  398903 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.923610  398903 kapi.go:59] client config for functional-261311: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
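The rest.Config logged above points kubectl-style clients at https://192.168.49.2:8441 with the profile's client certificate and CA. A minimal client-go sketch that builds a comparable config from the repaired kubeconfig; minikube uses its own kapi helper rather than this call:

    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a rest.Config from the kubeconfig file that was just repaired.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22112-362983/kubeconfig")
        if err != nil {
            panic(err)
        }
        fmt.Println(cfg.Host) // e.g. https://192.168.49.2:8441
    }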
	I1212 20:28:55.924164  398903 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:28:55.924185  398903 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:28:55.924192  398903 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:28:55.924198  398903 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:28:55.924202  398903 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:28:55.924512  398903 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:28:55.924617  398903 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 20:28:55.932459  398903 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 20:28:55.932497  398903 kubeadm.go:602] duration metric: took 17.683266ms to restartPrimaryControlPlane
	I1212 20:28:55.932527  398903 kubeadm.go:403] duration metric: took 53.402973ms to StartCluster
	I1212 20:28:55.932549  398903 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.932634  398903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.933272  398903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.933478  398903 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:28:55.933879  398903 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:28:55.933961  398903 addons.go:70] Setting storage-provisioner=true in profile "functional-261311"
	I1212 20:28:55.933975  398903 addons.go:239] Setting addon storage-provisioner=true in "functional-261311"
	I1212 20:28:55.933999  398903 host.go:66] Checking if "functional-261311" exists ...
	I1212 20:28:55.933941  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:55.934065  398903 addons.go:70] Setting default-storageclass=true in profile "functional-261311"
	I1212 20:28:55.934077  398903 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-261311"
	I1212 20:28:55.934349  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.934437  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.939847  398903 out.go:179] * Verifying Kubernetes components...
	I1212 20:28:55.942718  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:55.970904  398903 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:28:55.971648  398903 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.971825  398903 kapi.go:59] client config for functional-261311: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:28:55.972098  398903 addons.go:239] Setting addon default-storageclass=true in "functional-261311"
	I1212 20:28:55.972128  398903 host.go:66] Checking if "functional-261311" exists ...
	I1212 20:28:55.972592  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.974802  398903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:55.974826  398903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:28:55.974884  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:56.016147  398903 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:56.016169  398903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:28:56.016234  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:56.029989  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:56.052293  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:56.147892  398903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:28:56.182806  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:56.199875  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:56.957368  398903 node_ready.go:35] waiting up to 6m0s for node "functional-261311" to be "Ready" ...
	I1212 20:28:56.957463  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957488  398903 type.go:168] "Request Body" body=""
	I1212 20:28:56.957545  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1212 20:28:56.957546  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957630  398903 retry.go:31] will retry after 313.594755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957713  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:56.957754  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957788  398903 retry.go:31] will retry after 317.565464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
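The failed applies above are retried with a growing, slightly uneven delay (313ms, 317ms, ...), i.e. the usual backoff-with-jitter pattern. A minimal, illustrative Go sketch of that retry loop, assuming a generic runKubectlApply helper rather than minikube's actual retry package (names and attempt counts here are hypothetical):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runKubectlApply shells out to kubectl the same way the log shows, forcing the
// apply and relying on the apiserver coming back between attempts.
func runKubectlApply(manifest string) error {
	cmd := exec.Command("sudo", "kubectl", "apply", "--force", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply %s failed: %v\n%s", manifest, err, out)
	}
	return nil
}

// applyWithBackoff retries the apply with a roughly doubling delay and gives up
// after maxAttempts; the uneven delays in the log suggest jitter is added on top.
func applyWithBackoff(manifest string, maxAttempts int) error {
	delay := 300 * time.Millisecond
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if lastErr = runKubectlApply(manifest); lastErr == nil {
			return nil
		}
		fmt.Printf("attempt %d failed, will retry after %v: %v\n", attempt, delay, lastErr)
		time.Sleep(delay)
		delay *= 2
	}
	return lastErr
}

func main() {
	if err := applyWithBackoff("/etc/kubernetes/addons/storage-provisioner.yaml", 5); err != nil {
		fmt.Println("giving up:", err)
	}
}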
	I1212 20:28:56.957910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:57.272396  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:57.275890  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:57.344322  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.344435  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.344471  398903 retry.go:31] will retry after 221.297028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.351139  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.351181  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.351200  398903 retry.go:31] will retry after 309.802672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.458417  398903 type.go:168] "Request Body" body=""
	I1212 20:28:57.458511  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:57.458807  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:57.566100  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:57.625592  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.625687  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.625728  398903 retry.go:31] will retry after 499.665469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.661822  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:57.729487  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.729527  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.729550  398903 retry.go:31] will retry after 503.664724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.958032  398903 type.go:168] "Request Body" body=""
	I1212 20:28:57.958134  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:57.958421  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:58.126013  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:58.197757  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:58.197828  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.197853  398903 retry.go:31] will retry after 1.10540153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.234015  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:58.297441  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:58.297548  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.297576  398903 retry.go:31] will retry after 1.092264057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:28:58.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:58.458062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:58.957619  398903 type.go:168] "Request Body" body=""
	I1212 20:28:58.957696  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:58.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:28:58.958116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
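The repeated GET requests to /api/v1/nodes/functional-261311 above are the node readiness poll running while the apiserver still refuses connections. A rough client-go sketch of that kind of poll, reusing the kubeconfig path, node name and 6m0s budget from the log; the polling interval and error handling are illustrative, not minikube's actual node_ready implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node currently has a Ready=True condition.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. "connection refused" while kube-apiserver restarts
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // matches the "waiting up to 6m0s" budget in the log
	for time.Now().Before(deadline) {
		ok, err := nodeReady(context.Background(), cs, "functional-261311")
		if err != nil {
			fmt.Println("will retry:", err) // expected until the apiserver is reachable again
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
	}
	fmt.Println("timed out waiting for node to become Ready")
}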
	I1212 20:28:59.303542  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:59.364708  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:59.364773  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.364796  398903 retry.go:31] will retry after 1.503349263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.390910  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:59.449881  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:59.449970  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.450009  398903 retry.go:31] will retry after 1.024940216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.457981  398903 type.go:168] "Request Body" body=""
	I1212 20:28:59.458049  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:59.458335  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:59.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:28:59.957671  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:59.957942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:00.457683  398903 type.go:168] "Request Body" body=""
	I1212 20:29:00.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:00.458074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:00.475497  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:00.543993  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:00.544048  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.544072  398903 retry.go:31] will retry after 2.24833219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.868438  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:00.926476  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:00.930138  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.930173  398903 retry.go:31] will retry after 1.556562441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.958315  398903 type.go:168] "Request Body" body=""
	I1212 20:29:00.958392  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:00.958734  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:00.958787  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:01.458585  398903 type.go:168] "Request Body" body=""
	I1212 20:29:01.458668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:01.458995  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:01.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:29:01.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:01.958122  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:02.457889  398903 type.go:168] "Request Body" body=""
	I1212 20:29:02.457969  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:02.458299  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:02.487755  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:02.545597  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:02.549667  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.549705  398903 retry.go:31] will retry after 1.726891228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.793114  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:02.856403  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:02.860058  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.860101  398903 retry.go:31] will retry after 3.686133541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.958383  398903 type.go:168] "Request Body" body=""
	I1212 20:29:02.958453  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:02.958724  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:03.458506  398903 type.go:168] "Request Body" body=""
	I1212 20:29:03.458589  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:03.458945  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:03.459000  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:03.957692  398903 type.go:168] "Request Body" body=""
	I1212 20:29:03.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:03.958210  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:04.277666  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:04.331675  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:04.335668  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:04.335700  398903 retry.go:31] will retry after 4.014847664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:04.457944  398903 type.go:168] "Request Body" body=""
	I1212 20:29:04.458019  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:04.458285  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:04.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:04.957734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:04.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:05.457751  398903 type.go:168] "Request Body" body=""
	I1212 20:29:05.457828  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:05.458181  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:05.958009  398903 type.go:168] "Request Body" body=""
	I1212 20:29:05.958081  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:05.958416  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:05.958469  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:06.458265  398903 type.go:168] "Request Body" body=""
	I1212 20:29:06.458354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:06.458704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:06.546991  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:06.607592  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:06.607644  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:06.607664  398903 retry.go:31] will retry after 4.884355554s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:06.958122  398903 type.go:168] "Request Body" body=""
	I1212 20:29:06.958195  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:06.958538  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:07.458326  398903 type.go:168] "Request Body" body=""
	I1212 20:29:07.458394  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:07.458746  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:07.958381  398903 type.go:168] "Request Body" body=""
	I1212 20:29:07.958480  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:07.958781  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:07.958832  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:08.351452  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:08.404529  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:08.407970  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:08.408008  398903 retry.go:31] will retry after 4.723006947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:08.458208  398903 type.go:168] "Request Body" body=""
	I1212 20:29:08.458304  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:08.458620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:08.958349  398903 type.go:168] "Request Body" body=""
	I1212 20:29:08.958418  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:08.958733  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:09.458562  398903 type.go:168] "Request Body" body=""
	I1212 20:29:09.458637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:09.458962  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:09.957658  398903 type.go:168] "Request Body" body=""
	I1212 20:29:09.957734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:09.958100  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:10.458537  398903 type.go:168] "Request Body" body=""
	I1212 20:29:10.458602  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:10.458869  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:10.458910  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:10.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:29:10.957729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:10.958048  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:11.457940  398903 type.go:168] "Request Body" body=""
	I1212 20:29:11.458047  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:11.458416  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:11.492814  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:11.557889  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:11.557940  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:11.557960  398903 retry.go:31] will retry after 4.177574733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:11.958412  398903 type.go:168] "Request Body" body=""
	I1212 20:29:11.958494  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:11.958766  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:12.458544  398903 type.go:168] "Request Body" body=""
	I1212 20:29:12.458627  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:12.458916  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:12.458972  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:12.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:29:12.957732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:12.958047  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:13.131713  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:13.192350  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:13.192414  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:13.192433  398903 retry.go:31] will retry after 8.846505763s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:13.457684  398903 type.go:168] "Request Body" body=""
	I1212 20:29:13.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:13.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:13.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:29:13.957726  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:13.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:14.457780  398903 type.go:168] "Request Body" body=""
	I1212 20:29:14.457878  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:14.458172  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:14.957892  398903 type.go:168] "Request Body" body=""
	I1212 20:29:14.957968  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:14.958296  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:14.958356  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:15.457665  398903 type.go:168] "Request Body" body=""
	I1212 20:29:15.457745  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:15.458081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:15.737088  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:15.794323  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:15.794363  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:15.794386  398903 retry.go:31] will retry after 13.823463892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
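	The apply failures captured above follow minikube's addon retry path: each failed `kubectl apply` against the not-yet-ready API server is re-queued with a growing delay, which is what the `retry.go:31] will retry after …` lines record. The following is only an illustrative sketch of that retry-with-backoff pattern in Go, not minikube's actual addons code; the `runApply` and `applyWithRetry` helpers and the fixed attempt count are assumptions made for the example.

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// runApply shells out to kubectl, loosely mirroring the ssh_runner
	// invocations in the log. The manifest path is illustrative only.
	func runApply(manifest string) error {
		cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
		}
		return nil
	}

	// applyWithRetry retries a failed apply with a growing, jittered delay,
	// roughly the behavior reported by the retry.go lines above.
	func applyWithRetry(manifest string, attempts int, base time.Duration) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = runApply(manifest); err == nil {
				return nil
			}
			// Grow the delay on each attempt and add jitter so retries do not align.
			delay := time.Duration(i+1)*base + time.Duration(rand.Int63n(int64(base)))
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5, 10*time.Second); err != nil {
			fmt.Println("giving up:", err)
		}
	}

	In the real log the loop keeps failing because every attempt hits the same "connection refused" on localhost:8441, so the backoff alone cannot recover until the apiserver itself comes back.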
	I1212 20:29:15.958001  398903 type.go:168] "Request Body" body=""
	I1212 20:29:15.958077  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:15.958395  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:16.458178  398903 type.go:168] "Request Body" body=""
	I1212 20:29:16.458264  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:16.458517  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:16.958289  398903 type.go:168] "Request Body" body=""
	I1212 20:29:16.958364  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:16.958733  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:16.958807  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:17.458384  398903 type.go:168] "Request Body" body=""
	I1212 20:29:17.458485  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:17.458800  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:17.958573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:17.958679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:17.958934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:18.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:29:18.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:18.458009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:18.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:29:18.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:18.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:19.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:29:19.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:19.457978  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:19.458044  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:19.957579  398903 type.go:168] "Request Body" body=""
	I1212 20:29:19.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:19.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:20.457635  398903 type.go:168] "Request Body" body=""
	I1212 20:29:20.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:20.458035  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:20.957568  398903 type.go:168] "Request Body" body=""
	I1212 20:29:20.957646  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:20.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:21.457974  398903 type.go:168] "Request Body" body=""
	I1212 20:29:21.458051  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:21.458401  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:21.458459  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:21.958216  398903 type.go:168] "Request Body" body=""
	I1212 20:29:21.958294  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:21.958620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:22.040027  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:22.098166  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:22.102301  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:22.102333  398903 retry.go:31] will retry after 9.311877294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:22.458542  398903 type.go:168] "Request Body" body=""
	I1212 20:29:22.458608  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:22.458864  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:22.957555  398903 type.go:168] "Request Body" body=""
	I1212 20:29:22.957628  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:22.957965  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:23.457696  398903 type.go:168] "Request Body" body=""
	I1212 20:29:23.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:23.458108  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:23.957780  398903 type.go:168] "Request Body" body=""
	I1212 20:29:23.957869  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:23.958143  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:23.958184  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:24.457666  398903 type.go:168] "Request Body" body=""
	I1212 20:29:24.457740  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:24.458060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:24.957754  398903 type.go:168] "Request Body" body=""
	I1212 20:29:24.957831  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:24.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:25.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:29:25.457678  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:25.457956  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:25.958502  398903 type.go:168] "Request Body" body=""
	I1212 20:29:25.958583  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:25.958919  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:25.958993  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:26.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:29:26.457736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:26.458131  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:26.957783  398903 type.go:168] "Request Body" body=""
	I1212 20:29:26.957860  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:26.958177  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:27.457614  398903 type.go:168] "Request Body" body=""
	I1212 20:29:27.457693  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:27.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:27.957616  398903 type.go:168] "Request Body" body=""
	I1212 20:29:27.957698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:27.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:28.457711  398903 type.go:168] "Request Body" body=""
	I1212 20:29:28.457785  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:28.458119  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:28.458170  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:28.957619  398903 type.go:168] "Request Body" body=""
	I1212 20:29:28.957713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:28.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:29.457661  398903 type.go:168] "Request Body" body=""
	I1212 20:29:29.457736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:29.458113  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:29.618498  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:29.673247  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:29.677091  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:29.677126  398903 retry.go:31] will retry after 12.247484069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:29.958487  398903 type.go:168] "Request Body" body=""
	I1212 20:29:29.958556  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:29.958828  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:30.457589  398903 type.go:168] "Request Body" body=""
	I1212 20:29:30.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:30.458053  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:30.957764  398903 type.go:168] "Request Body" body=""
	I1212 20:29:30.957837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:30.958165  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:30.958221  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:31.415106  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:31.457708  398903 type.go:168] "Request Body" body=""
	I1212 20:29:31.457795  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:31.458059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:31.477657  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:31.481452  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:31.481486  398903 retry.go:31] will retry after 29.999837192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:31.958251  398903 type.go:168] "Request Body" body=""
	I1212 20:29:31.958329  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:31.958678  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:32.458335  398903 type.go:168] "Request Body" body=""
	I1212 20:29:32.458415  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:32.458816  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:32.958367  398903 type.go:168] "Request Body" body=""
	I1212 20:29:32.958440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:32.958702  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:32.958743  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:33.458498  398903 type.go:168] "Request Body" body=""
	I1212 20:29:33.458574  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:33.458942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:33.957518  398903 type.go:168] "Request Body" body=""
	I1212 20:29:33.957595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:33.957939  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:34.457617  398903 type.go:168] "Request Body" body=""
	I1212 20:29:34.457695  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:34.457969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:34.957613  398903 type.go:168] "Request Body" body=""
	I1212 20:29:34.957696  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:34.958009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:35.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:29:35.457690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:35.458075  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:35.458135  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:35.957713  398903 type.go:168] "Request Body" body=""
	I1212 20:29:35.957790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:35.958111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:36.457989  398903 type.go:168] "Request Body" body=""
	I1212 20:29:36.458070  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:36.458457  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:36.958268  398903 type.go:168] "Request Body" body=""
	I1212 20:29:36.958361  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:36.958681  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:37.458419  398903 type.go:168] "Request Body" body=""
	I1212 20:29:37.458489  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:37.458760  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:37.458803  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:37.958548  398903 type.go:168] "Request Body" body=""
	I1212 20:29:37.958632  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:37.958989  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:38.457703  398903 type.go:168] "Request Body" body=""
	I1212 20:29:38.457783  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:38.458130  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:38.957582  398903 type.go:168] "Request Body" body=""
	I1212 20:29:38.957648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:38.957909  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:39.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:29:39.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:39.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:39.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:39.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:39.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:39.958142  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:40.458512  398903 type.go:168] "Request Body" body=""
	I1212 20:29:40.458585  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:40.458875  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:40.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:40.957663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:40.957999  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:41.458005  398903 type.go:168] "Request Body" body=""
	I1212 20:29:41.458079  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:41.458415  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:41.924900  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:41.958510  398903 type.go:168] "Request Body" body=""
	I1212 20:29:41.958584  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:41.958850  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:41.958891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:42.001052  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:42.001094  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:42.001115  398903 retry.go:31] will retry after 30.772279059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:42.457672  398903 type.go:168] "Request Body" body=""
	I1212 20:29:42.457755  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:42.458082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:42.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:29:42.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:42.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:43.458540  398903 type.go:168] "Request Body" body=""
	I1212 20:29:43.458610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:43.458870  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:43.957586  398903 type.go:168] "Request Body" body=""
	I1212 20:29:43.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:43.958032  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:44.457633  398903 type.go:168] "Request Body" body=""
	I1212 20:29:44.457707  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:44.458045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:44.458100  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:44.957734  398903 type.go:168] "Request Body" body=""
	I1212 20:29:44.957834  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:44.958170  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:45.457726  398903 type.go:168] "Request Body" body=""
	I1212 20:29:45.457799  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:45.458152  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:45.957997  398903 type.go:168] "Request Body" body=""
	I1212 20:29:45.958081  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:45.958445  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:46.458286  398903 type.go:168] "Request Body" body=""
	I1212 20:29:46.458355  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:46.458622  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:46.458663  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:46.958455  398903 type.go:168] "Request Body" body=""
	I1212 20:29:46.958553  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:46.958947  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:47.457794  398903 type.go:168] "Request Body" body=""
	I1212 20:29:47.457932  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:47.458463  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:47.958292  398903 type.go:168] "Request Body" body=""
	I1212 20:29:47.958370  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:47.958645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:48.458483  398903 type.go:168] "Request Body" body=""
	I1212 20:29:48.458555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:48.458899  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:48.458971  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:48.957649  398903 type.go:168] "Request Body" body=""
	I1212 20:29:48.957731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:48.958090  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:49.457581  398903 type.go:168] "Request Body" body=""
	I1212 20:29:49.457649  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:49.457920  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:49.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:29:49.957681  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:49.958050  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:50.457756  398903 type.go:168] "Request Body" body=""
	I1212 20:29:50.457838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:50.458163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:50.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:50.957647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:50.957983  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:50.958033  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:51.457978  398903 type.go:168] "Request Body" body=""
	I1212 20:29:51.458054  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:51.458398  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:51.958201  398903 type.go:168] "Request Body" body=""
	I1212 20:29:51.958282  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:51.958598  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:52.458345  398903 type.go:168] "Request Body" body=""
	I1212 20:29:52.458418  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:52.458689  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:52.958457  398903 type.go:168] "Request Body" body=""
	I1212 20:29:52.958540  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:52.958883  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:52.958945  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:53.457615  398903 type.go:168] "Request Body" body=""
	I1212 20:29:53.457698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:53.458072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:53.957603  398903 type.go:168] "Request Body" body=""
	I1212 20:29:53.957674  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:53.957991  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:54.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:54.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:54.458053  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:54.957787  398903 type.go:168] "Request Body" body=""
	I1212 20:29:54.957892  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:54.958225  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:55.457579  398903 type.go:168] "Request Body" body=""
	I1212 20:29:55.457654  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:55.457934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:55.457987  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:55.957904  398903 type.go:168] "Request Body" body=""
	I1212 20:29:55.957979  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:55.958319  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:56.458108  398903 type.go:168] "Request Body" body=""
	I1212 20:29:56.458185  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:56.458525  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:56.958251  398903 type.go:168] "Request Body" body=""
	I1212 20:29:56.958317  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:56.958572  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:57.458381  398903 type.go:168] "Request Body" body=""
	I1212 20:29:57.458456  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:57.458824  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:57.458880  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:57.957590  398903 type.go:168] "Request Body" body=""
	I1212 20:29:57.957685  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:57.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:58.457591  398903 type.go:168] "Request Body" body=""
	I1212 20:29:58.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:58.457943  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:58.957651  398903 type.go:168] "Request Body" body=""
	I1212 20:29:58.957737  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:58.958104  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:59.457826  398903 type.go:168] "Request Body" body=""
	I1212 20:29:59.457924  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:59.458273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:59.957645  398903 type.go:168] "Request Body" body=""
	I1212 20:29:59.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:59.958054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:59.958118  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:00.457778  398903 type.go:168] "Request Body" body=""
	I1212 20:30:00.457870  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:00.458208  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:00.958235  398903 type.go:168] "Request Body" body=""
	I1212 20:30:00.958321  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:00.958755  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:01.460861  398903 type.go:168] "Request Body" body=""
	I1212 20:30:01.460950  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:01.461277  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:01.481640  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:30:01.559465  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:01.559521  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:01.559544  398903 retry.go:31] will retry after 33.36515596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:01.958099  398903 type.go:168] "Request Body" body=""
	I1212 20:30:01.958188  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:01.958490  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:01.958533  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:02.458305  398903 type.go:168] "Request Body" body=""
	I1212 20:30:02.458381  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:02.458719  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:02.958386  398903 type.go:168] "Request Body" body=""
	I1212 20:30:02.958464  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:02.958745  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:03.457579  398903 type.go:168] "Request Body" body=""
	I1212 20:30:03.457694  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:03.458099  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:03.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:30:03.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:03.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:04.457668  398903 type.go:168] "Request Body" body=""
	I1212 20:30:04.457751  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:04.458056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:04.458116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:04.957692  398903 type.go:168] "Request Body" body=""
	I1212 20:30:04.957771  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:04.958103  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:05.457691  398903 type.go:168] "Request Body" body=""
	I1212 20:30:05.457777  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:05.458124  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:05.958166  398903 type.go:168] "Request Body" body=""
	I1212 20:30:05.958257  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:05.958561  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:06.458375  398903 type.go:168] "Request Body" body=""
	I1212 20:30:06.458451  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:06.458788  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:06.458844  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:06.957529  398903 type.go:168] "Request Body" body=""
	I1212 20:30:06.957610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:06.957955  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:07.457552  398903 type.go:168] "Request Body" body=""
	I1212 20:30:07.457657  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:07.457968  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:07.957700  398903 type.go:168] "Request Body" body=""
	I1212 20:30:07.957780  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:07.958080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:08.457647  398903 type.go:168] "Request Body" body=""
	I1212 20:30:08.457728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:08.458065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:08.957730  398903 type.go:168] "Request Body" body=""
	I1212 20:30:08.957837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:08.958111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:08.958162  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:09.457851  398903 type.go:168] "Request Body" body=""
	I1212 20:30:09.457929  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:09.458309  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:09.958049  398903 type.go:168] "Request Body" body=""
	I1212 20:30:09.958147  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:09.958566  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:10.458362  398903 type.go:168] "Request Body" body=""
	I1212 20:30:10.458440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:10.458707  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:10.958517  398903 type.go:168] "Request Body" body=""
	I1212 20:30:10.958590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:10.958916  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:10.958976  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:11.457913  398903 type.go:168] "Request Body" body=""
	I1212 20:30:11.458009  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:11.458358  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:11.958078  398903 type.go:168] "Request Body" body=""
	I1212 20:30:11.958148  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:11.958429  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:12.458295  398903 type.go:168] "Request Body" body=""
	I1212 20:30:12.458371  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:12.458726  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:12.774318  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:30:12.840421  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:12.840464  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:12.840483  398903 retry.go:31] will retry after 30.011296842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:12.957679  398903 type.go:168] "Request Body" body=""
	I1212 20:30:12.957756  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:12.958081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:13.457610  398903 type.go:168] "Request Body" body=""
	I1212 20:30:13.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:13.457937  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:13.457978  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:13.957691  398903 type.go:168] "Request Body" body=""
	I1212 20:30:13.957779  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:13.958199  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:14.457740  398903 type.go:168] "Request Body" body=""
	I1212 20:30:14.457821  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:14.458184  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:14.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:30:14.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:14.958021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:15.457670  398903 type.go:168] "Request Body" body=""
	I1212 20:30:15.457751  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:15.458088  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:15.458148  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:15.958126  398903 type.go:168] "Request Body" body=""
	I1212 20:30:15.958215  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:15.958644  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:16.458362  398903 type.go:168] "Request Body" body=""
	I1212 20:30:16.458429  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:16.458692  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:16.958433  398903 type.go:168] "Request Body" body=""
	I1212 20:30:16.958508  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:16.958865  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:17.458563  398903 type.go:168] "Request Body" body=""
	I1212 20:30:17.458662  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:17.459072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:17.459137  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:17.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:30:17.957765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:17.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:18.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:30:18.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:18.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:18.957647  398903 type.go:168] "Request Body" body=""
	I1212 20:30:18.957740  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:18.958158  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:19.457570  398903 type.go:168] "Request Body" body=""
	I1212 20:30:19.457653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:19.457996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:19.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:30:19.957747  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:19.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:19.958157  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:20.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:30:20.457785  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:20.458135  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:20.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:30:20.957690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:20.958023  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:21.458157  398903 type.go:168] "Request Body" body=""
	I1212 20:30:21.458249  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:21.458570  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:21.958397  398903 type.go:168] "Request Body" body=""
	I1212 20:30:21.958474  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:21.958860  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:21.958919  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:22.457576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:22.457650  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:22.457962  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:22.957698  398903 type.go:168] "Request Body" body=""
	I1212 20:30:22.957818  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:22.958168  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:23.457673  398903 type.go:168] "Request Body" body=""
	I1212 20:30:23.457752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:23.458096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:23.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:23.957683  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:23.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:24.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:30:24.457734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:24.458020  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:24.458072  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:24.957672  398903 type.go:168] "Request Body" body=""
	I1212 20:30:24.957748  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:24.958123  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:25.457534  398903 type.go:168] "Request Body" body=""
	I1212 20:30:25.457604  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:25.457872  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:25.958565  398903 type.go:168] "Request Body" body=""
	I1212 20:30:25.958637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:25.958933  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:26.457975  398903 type.go:168] "Request Body" body=""
	I1212 20:30:26.458048  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:26.458392  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:26.458450  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:26.957925  398903 type.go:168] "Request Body" body=""
	I1212 20:30:26.957996  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:26.958288  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:27.457662  398903 type.go:168] "Request Body" body=""
	I1212 20:30:27.457734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:27.458086  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:27.957807  398903 type.go:168] "Request Body" body=""
	I1212 20:30:27.957887  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:27.958218  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:28.457696  398903 type.go:168] "Request Body" body=""
	I1212 20:30:28.457762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:28.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:28.957686  398903 type.go:168] "Request Body" body=""
	I1212 20:30:28.957778  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:28.958129  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:28.958185  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:29.457860  398903 type.go:168] "Request Body" body=""
	I1212 20:30:29.457948  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:29.458268  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:29.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:29.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:29.957934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:30.457654  398903 type.go:168] "Request Body" body=""
	I1212 20:30:30.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:30.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:30.957783  398903 type.go:168] "Request Body" body=""
	I1212 20:30:30.957859  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:30.958248  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:30.958301  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:31.458270  398903 type.go:168] "Request Body" body=""
	I1212 20:30:31.458363  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:31.458639  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:31.958457  398903 type.go:168] "Request Body" body=""
	I1212 20:30:31.958547  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:31.958925  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:32.457675  398903 type.go:168] "Request Body" body=""
	I1212 20:30:32.457752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:32.458042  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:32.957526  398903 type.go:168] "Request Body" body=""
	I1212 20:30:32.957599  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:32.957876  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:33.457638  398903 type.go:168] "Request Body" body=""
	I1212 20:30:33.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:33.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:33.458151  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:33.957835  398903 type.go:168] "Request Body" body=""
	I1212 20:30:33.957912  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:33.958249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:30:34.457709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:34.458076  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.925852  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:30:34.958350  398903 type.go:168] "Request Body" body=""
	I1212 20:30:34.958426  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:34.958704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.987024  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:34.990602  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:34.990708  398903 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 20:30:35.458275  398903 type.go:168] "Request Body" body=""
	I1212 20:30:35.458354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:35.458681  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:35.458739  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:35.958407  398903 type.go:168] "Request Body" body=""
	I1212 20:30:35.958492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:35.958762  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:36.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:30:36.457712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:36.458038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:36.957607  398903 type.go:168] "Request Body" body=""
	I1212 20:30:36.957687  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:36.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:37.457711  398903 type.go:168] "Request Body" body=""
	I1212 20:30:37.457790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:37.458074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:37.957761  398903 type.go:168] "Request Body" body=""
	I1212 20:30:37.957838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:37.958213  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:37.958272  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:38.457940  398903 type.go:168] "Request Body" body=""
	I1212 20:30:38.458016  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:38.458369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:38.958134  398903 type.go:168] "Request Body" body=""
	I1212 20:30:38.958210  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:38.958478  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:39.458248  398903 type.go:168] "Request Body" body=""
	I1212 20:30:39.458336  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:39.458729  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:39.958456  398903 type.go:168] "Request Body" body=""
	I1212 20:30:39.958539  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:39.958888  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:39.958942  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:40.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:30:40.457648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:40.457967  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:40.957645  398903 type.go:168] "Request Body" body=""
	I1212 20:30:40.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:40.958059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:41.458059  398903 type.go:168] "Request Body" body=""
	I1212 20:30:41.458151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:41.458482  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:41.958252  398903 type.go:168] "Request Body" body=""
	I1212 20:30:41.958327  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:41.958608  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:42.458416  398903 type.go:168] "Request Body" body=""
	I1212 20:30:42.458492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:42.458825  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:42.458889  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:42.852572  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:30:42.917565  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:42.921658  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:42.921759  398903 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 20:30:42.924799  398903 out.go:179] * Enabled addons: 
	I1212 20:30:42.926930  398903 addons.go:530] duration metric: took 1m46.993054127s for enable addons: enabled=[]
	I1212 20:30:42.957819  398903 type.go:168] "Request Body" body=""
	I1212 20:30:42.957896  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:42.958219  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:43.457528  398903 type.go:168] "Request Body" body=""
	I1212 20:30:43.457600  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:43.457900  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:43.957607  398903 type.go:168] "Request Body" body=""
	I1212 20:30:43.957687  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:43.958029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:44.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:30:44.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:44.458022  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:44.957587  398903 type.go:168] "Request Body" body=""
	I1212 20:30:44.957676  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:44.957941  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:44.957982  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:45.457697  398903 type.go:168] "Request Body" body=""
	I1212 20:30:45.457796  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:45.458121  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:45.958191  398903 type.go:168] "Request Body" body=""
	I1212 20:30:45.958294  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:45.958612  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:46.458444  398903 type.go:168] "Request Body" body=""
	I1212 20:30:46.458532  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:46.458807  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:46.957599  398903 type.go:168] "Request Body" body=""
	I1212 20:30:46.957698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:46.958064  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:46.958134  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:47.457807  398903 type.go:168] "Request Body" body=""
	I1212 20:30:47.457902  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:47.458266  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:47.957963  398903 type.go:168] "Request Body" body=""
	I1212 20:30:47.958044  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:47.958323  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:48.457878  398903 type.go:168] "Request Body" body=""
	I1212 20:30:48.457954  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:48.458353  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:48.957937  398903 type.go:168] "Request Body" body=""
	I1212 20:30:48.958025  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:48.958407  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:48.958465  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:49.458150  398903 type.go:168] "Request Body" body=""
	I1212 20:30:49.458217  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:49.458483  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:49.958339  398903 type.go:168] "Request Body" body=""
	I1212 20:30:49.958422  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:49.958782  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:50.457522  398903 type.go:168] "Request Body" body=""
	I1212 20:30:50.457619  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:50.457974  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:50.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:30:50.957709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:50.957969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:51.457956  398903 type.go:168] "Request Body" body=""
	I1212 20:30:51.458033  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:51.458372  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:51.458436  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:51.958247  398903 type.go:168] "Request Body" body=""
	I1212 20:30:51.958354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:51.958760  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:52.458531  398903 type.go:168] "Request Body" body=""
	I1212 20:30:52.458606  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:52.458887  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:52.957622  398903 type.go:168] "Request Body" body=""
	I1212 20:30:52.957701  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:52.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:53.457803  398903 type.go:168] "Request Body" body=""
	I1212 20:30:53.457880  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:53.458232  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:53.957948  398903 type.go:168] "Request Body" body=""
	I1212 20:30:53.958039  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:53.958314  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:53.958357  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:54.458007  398903 type.go:168] "Request Body" body=""
	I1212 20:30:54.458120  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:54.458562  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:54.957657  398903 type.go:168] "Request Body" body=""
	I1212 20:30:54.957767  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:54.958125  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:55.457599  398903 type.go:168] "Request Body" body=""
	I1212 20:30:55.457671  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:55.458062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:55.958515  398903 type.go:168] "Request Body" body=""
	I1212 20:30:55.958592  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:55.958958  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:55.959020  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:56.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:30:56.457702  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:56.458059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:56.957581  398903 type.go:168] "Request Body" body=""
	I1212 20:30:56.957655  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:56.957949  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:57.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:30:57.457710  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:57.458063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:57.958430  398903 type.go:168] "Request Body" body=""
	I1212 20:30:57.958528  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:57.958868  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:58.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:30:58.457682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:58.458002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:58.458062  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:58.957718  398903 type.go:168] "Request Body" body=""
	I1212 20:30:58.957798  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:58.958154  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:59.457651  398903 type.go:168] "Request Body" body=""
	I1212 20:30:59.457732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:59.458077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:59.957798  398903 type.go:168] "Request Body" body=""
	I1212 20:30:59.957888  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:59.958201  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:00.457692  398903 type.go:168] "Request Body" body=""
	I1212 20:31:00.457780  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:00.458189  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:00.458250  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:00.957940  398903 type.go:168] "Request Body" body=""
	I1212 20:31:00.958024  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:00.958346  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:01.458223  398903 type.go:168] "Request Body" body=""
	I1212 20:31:01.458299  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:01.458574  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:01.958306  398903 type.go:168] "Request Body" body=""
	I1212 20:31:01.958388  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:01.958736  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:02.458565  398903 type.go:168] "Request Body" body=""
	I1212 20:31:02.458645  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:02.459016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:02.459076  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:02.957720  398903 type.go:168] "Request Body" body=""
	I1212 20:31:02.957798  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:02.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:03.457664  398903 type.go:168] "Request Body" body=""
	I1212 20:31:03.457746  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:03.458099  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:03.957853  398903 type.go:168] "Request Body" body=""
	I1212 20:31:03.957937  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:03.958274  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:04.457595  398903 type.go:168] "Request Body" body=""
	I1212 20:31:04.457669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:04.458030  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:04.957597  398903 type.go:168] "Request Body" body=""
	I1212 20:31:04.957676  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:04.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:04.958098  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:05.457625  398903 type.go:168] "Request Body" body=""
	I1212 20:31:05.457701  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:05.458052  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:05.957782  398903 type.go:168] "Request Body" body=""
	I1212 20:31:05.957863  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:05.958194  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:06.458145  398903 type.go:168] "Request Body" body=""
	I1212 20:31:06.458228  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:06.458587  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:06.958415  398903 type.go:168] "Request Body" body=""
	I1212 20:31:06.958493  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:06.958820  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:06.958879  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:07.457506  398903 type.go:168] "Request Body" body=""
	I1212 20:31:07.457575  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:07.457849  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:07.957622  398903 type.go:168] "Request Body" body=""
	I1212 20:31:07.957714  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:07.958056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:08.457776  398903 type.go:168] "Request Body" body=""
	I1212 20:31:08.457879  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:08.458223  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:08.957577  398903 type.go:168] "Request Body" body=""
	I1212 20:31:08.957652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:08.957982  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:09.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:31:09.457705  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:09.458016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:09.458076  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:09.957794  398903 type.go:168] "Request Body" body=""
	I1212 20:31:09.957907  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:09.958279  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:10.457971  398903 type.go:168] "Request Body" body=""
	I1212 20:31:10.458047  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:10.458382  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:10.958220  398903 type.go:168] "Request Body" body=""
	I1212 20:31:10.958321  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:10.958714  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:11.457646  398903 type.go:168] "Request Body" body=""
	I1212 20:31:11.457724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:11.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:11.458138  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:11.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:31:11.957664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:11.957969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:12.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:31:12.457686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:12.458031  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:12.957743  398903 type.go:168] "Request Body" body=""
	I1212 20:31:12.957841  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:12.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:13.458376  398903 type.go:168] "Request Body" body=""
	I1212 20:31:13.458443  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:13.458763  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:13.458818  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:13.958577  398903 type.go:168] "Request Body" body=""
	I1212 20:31:13.958652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:13.958977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:14.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:31:14.457733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:14.458101  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:14.957799  398903 type.go:168] "Request Body" body=""
	I1212 20:31:14.957875  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:14.958197  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:15.457653  398903 type.go:168] "Request Body" body=""
	I1212 20:31:15.457732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:15.458080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:15.958122  398903 type.go:168] "Request Body" body=""
	I1212 20:31:15.958204  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:15.958537  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:15.958599  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:16.458429  398903 type.go:168] "Request Body" body=""
	I1212 20:31:16.458501  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:16.458769  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:16.957534  398903 type.go:168] "Request Body" body=""
	I1212 20:31:16.957617  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:16.957998  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:17.457728  398903 type.go:168] "Request Body" body=""
	I1212 20:31:17.457806  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:17.458115  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:17.957591  398903 type.go:168] "Request Body" body=""
	I1212 20:31:17.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:17.958019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:18.457741  398903 type.go:168] "Request Body" body=""
	I1212 20:31:18.457847  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:18.458133  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:18.458180  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:18.957696  398903 type.go:168] "Request Body" body=""
	I1212 20:31:18.957790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:18.958212  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:19.457727  398903 type.go:168] "Request Body" body=""
	I1212 20:31:19.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:19.458140  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:19.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:31:19.957742  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:19.958077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:20.457686  398903 type.go:168] "Request Body" body=""
	I1212 20:31:20.457762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:20.458091  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:20.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:31:20.957650  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:20.957923  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:20.957972  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:21.457915  398903 type.go:168] "Request Body" body=""
	I1212 20:31:21.457990  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:21.458320  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:21.958165  398903 type.go:168] "Request Body" body=""
	I1212 20:31:21.958276  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:21.958607  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:22.458365  398903 type.go:168] "Request Body" body=""
	I1212 20:31:22.458440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:22.458716  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:22.958558  398903 type.go:168] "Request Body" body=""
	I1212 20:31:22.958659  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:22.959007  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:22.959071  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:23.457766  398903 type.go:168] "Request Body" body=""
	I1212 20:31:23.457845  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:23.458211  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:23.957896  398903 type.go:168] "Request Body" body=""
	I1212 20:31:23.957969  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:23.958315  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:24.457613  398903 type.go:168] "Request Body" body=""
	I1212 20:31:24.457714  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:24.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:24.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:31:24.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:24.958115  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:25.457623  398903 type.go:168] "Request Body" body=""
	I1212 20:31:25.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:25.457977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:25.458017  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:25.958041  398903 type.go:168] "Request Body" body=""
	I1212 20:31:25.958123  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:25.958512  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:26.458319  398903 type.go:168] "Request Body" body=""
	I1212 20:31:26.458398  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:26.458689  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:26.958470  398903 type.go:168] "Request Body" body=""
	I1212 20:31:26.958549  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:26.958846  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:27.457587  398903 type.go:168] "Request Body" body=""
	I1212 20:31:27.457677  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:27.457993  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:27.458047  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:27.957637  398903 type.go:168] "Request Body" body=""
	I1212 20:31:27.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:27.958051  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:28.457523  398903 type.go:168] "Request Body" body=""
	I1212 20:31:28.457597  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:28.457900  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:28.957667  398903 type.go:168] "Request Body" body=""
	I1212 20:31:28.957755  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:28.958112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:29.457645  398903 type.go:168] "Request Body" body=""
	I1212 20:31:29.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:29.458112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:29.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:29.957515  398903 type.go:168] "Request Body" body=""
	I1212 20:31:29.957590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:29.957922  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:30.457639  398903 type.go:168] "Request Body" body=""
	I1212 20:31:30.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:30.458057  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:30.957753  398903 type.go:168] "Request Body" body=""
	I1212 20:31:30.957854  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:30.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:31.458036  398903 type.go:168] "Request Body" body=""
	I1212 20:31:31.458104  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:31.458369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:31.458409  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:31.958181  398903 type.go:168] "Request Body" body=""
	I1212 20:31:31.958258  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:31.958643  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:32.458473  398903 type.go:168] "Request Body" body=""
	I1212 20:31:32.458585  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:32.458949  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:32.957626  398903 type.go:168] "Request Body" body=""
	I1212 20:31:32.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:32.958012  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:33.457650  398903 type.go:168] "Request Body" body=""
	I1212 20:31:33.457738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:33.458114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:33.957824  398903 type.go:168] "Request Body" body=""
	I1212 20:31:33.957905  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:33.958247  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:33.958303  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:34.458003  398903 type.go:168] "Request Body" body=""
	I1212 20:31:34.458078  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:34.458409  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:34.958240  398903 type.go:168] "Request Body" body=""
	I1212 20:31:34.958349  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:34.958734  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:35.458572  398903 type.go:168] "Request Body" body=""
	I1212 20:31:35.458682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:35.459077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:35.958480  398903 type.go:168] "Request Body" body=""
	I1212 20:31:35.958555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:35.958847  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:35.958891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:36.457738  398903 type.go:168] "Request Body" body=""
	I1212 20:31:36.457817  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:36.458167  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:36.957850  398903 type.go:168] "Request Body" body=""
	I1212 20:31:36.957948  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:36.958275  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:37.457594  398903 type.go:168] "Request Body" body=""
	I1212 20:31:37.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:37.457978  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:37.957634  398903 type.go:168] "Request Body" body=""
	I1212 20:31:37.957712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:37.958057  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:38.457680  398903 type.go:168] "Request Body" body=""
	I1212 20:31:38.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:38.458134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:38.458189  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:38.957510  398903 type.go:168] "Request Body" body=""
	I1212 20:31:38.957592  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:38.957862  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:39.457578  398903 type.go:168] "Request Body" body=""
	I1212 20:31:39.457664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:39.457985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:39.957715  398903 type.go:168] "Request Body" body=""
	I1212 20:31:39.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:39.958106  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:40.457563  398903 type.go:168] "Request Body" body=""
	I1212 20:31:40.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:40.457964  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:40.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:31:40.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:40.958114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:40.958173  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:41.457926  398903 type.go:168] "Request Body" body=""
	I1212 20:31:41.458028  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:41.458354  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:41.958180  398903 type.go:168] "Request Body" body=""
	I1212 20:31:41.958256  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:41.958548  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:42.458349  398903 type.go:168] "Request Body" body=""
	I1212 20:31:42.458439  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:42.458833  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:42.958514  398903 type.go:168] "Request Body" body=""
	I1212 20:31:42.958594  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:42.958932  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:42.958992  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:43.457618  398903 type.go:168] "Request Body" body=""
	I1212 20:31:43.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:43.458058  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:43.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:31:43.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:43.958071  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:44.457779  398903 type.go:168] "Request Body" body=""
	I1212 20:31:44.457857  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:44.458177  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:44.957579  398903 type.go:168] "Request Body" body=""
	I1212 20:31:44.957657  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:44.957982  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:45.457590  398903 type.go:168] "Request Body" body=""
	I1212 20:31:45.457667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:45.458010  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:45.458070  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:45.957784  398903 type.go:168] "Request Body" body=""
	I1212 20:31:45.957877  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:45.958249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:46.458071  398903 type.go:168] "Request Body" body=""
	I1212 20:31:46.458151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:46.458414  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:46.958212  398903 type.go:168] "Request Body" body=""
	I1212 20:31:46.958295  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:46.958642  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:47.458480  398903 type.go:168] "Request Body" body=""
	I1212 20:31:47.458558  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:47.458926  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:47.458982  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:47.957584  398903 type.go:168] "Request Body" body=""
	I1212 20:31:47.957658  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:47.957921  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:48.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:31:48.457764  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:48.458171  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:48.957862  398903 type.go:168] "Request Body" body=""
	I1212 20:31:48.957972  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:48.958326  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:49.458004  398903 type.go:168] "Request Body" body=""
	I1212 20:31:49.458083  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:49.458381  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:49.958209  398903 type.go:168] "Request Body" body=""
	I1212 20:31:49.958290  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:49.958636  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:49.958695  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:50.458420  398903 type.go:168] "Request Body" body=""
	I1212 20:31:50.458495  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:50.458818  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:50.957496  398903 type.go:168] "Request Body" body=""
	I1212 20:31:50.957563  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:50.957832  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:51.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:31:51.457746  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:51.458084  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:51.957648  398903 type.go:168] "Request Body" body=""
	I1212 20:31:51.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:51.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:52.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:31:52.457781  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:52.458111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:52.458163  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:52.957662  398903 type.go:168] "Request Body" body=""
	I1212 20:31:52.957750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:52.958096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:53.457800  398903 type.go:168] "Request Body" body=""
	I1212 20:31:53.457898  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:53.458256  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:53.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:31:53.957647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:53.957914  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:54.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:31:54.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:54.458054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:54.957782  398903 type.go:168] "Request Body" body=""
	I1212 20:31:54.957867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:54.958171  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:54.958225  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:55.457602  398903 type.go:168] "Request Body" body=""
	I1212 20:31:55.457673  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:55.457942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:55.957857  398903 type.go:168] "Request Body" body=""
	I1212 20:31:55.957935  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:55.958273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:56.458155  398903 type.go:168] "Request Body" body=""
	I1212 20:31:56.458233  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:56.458540  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:56.958285  398903 type.go:168] "Request Body" body=""
	I1212 20:31:56.958359  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:56.958625  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:56.958670  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:57.458411  398903 type.go:168] "Request Body" body=""
	I1212 20:31:57.458485  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:57.458823  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:57.958474  398903 type.go:168] "Request Body" body=""
	I1212 20:31:57.958559  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:57.958919  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:58.457568  398903 type.go:168] "Request Body" body=""
	I1212 20:31:58.457647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:58.457965  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:58.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:31:58.957725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:58.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:59.457623  398903 type.go:168] "Request Body" body=""
	I1212 20:31:59.457697  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:59.458016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:59.458072  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:59.957590  398903 type.go:168] "Request Body" body=""
	I1212 20:31:59.957669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:59.957976  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:00.457722  398903 type.go:168] "Request Body" body=""
	I1212 20:32:00.457811  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:00.458158  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:00.958017  398903 type.go:168] "Request Body" body=""
	I1212 20:32:00.958101  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:00.958428  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:01.458294  398903 type.go:168] "Request Body" body=""
	I1212 20:32:01.458366  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:01.458700  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:01.458759  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:01.958578  398903 type.go:168] "Request Body" body=""
	I1212 20:32:01.958660  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:01.959010  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:02.457649  398903 type.go:168] "Request Body" body=""
	I1212 20:32:02.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:02.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:02.957664  398903 type.go:168] "Request Body" body=""
	I1212 20:32:02.957736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:02.958135  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:03.457649  398903 type.go:168] "Request Body" body=""
	I1212 20:32:03.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:03.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:03.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:32:03.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:03.958067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:03.958124  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:04.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:32:04.457689  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:04.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:04.957738  398903 type.go:168] "Request Body" body=""
	I1212 20:32:04.957816  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:04.958159  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:05.457846  398903 type.go:168] "Request Body" body=""
	I1212 20:32:05.457928  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:05.458292  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:05.958124  398903 type.go:168] "Request Body" body=""
	I1212 20:32:05.958202  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:05.958466  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:05.958511  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:06.458381  398903 type.go:168] "Request Body" body=""
	I1212 20:32:06.458469  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:06.458820  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:06.957560  398903 type.go:168] "Request Body" body=""
	I1212 20:32:06.957684  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:06.958040  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:07.457550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:07.457620  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:07.457897  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:07.957602  398903 type.go:168] "Request Body" body=""
	I1212 20:32:07.957684  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:07.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:08.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:32:08.457680  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:08.458006  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:08.458064  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:08.958540  398903 type.go:168] "Request Body" body=""
	I1212 20:32:08.958617  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:08.958908  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:09.457585  398903 type.go:168] "Request Body" body=""
	I1212 20:32:09.457660  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:09.458015  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:09.957606  398903 type.go:168] "Request Body" body=""
	I1212 20:32:09.957683  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:09.958016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:10.457589  398903 type.go:168] "Request Body" body=""
	I1212 20:32:10.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:10.457990  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:10.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:32:10.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:10.958058  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:10.958119  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:11.458077  398903 type.go:168] "Request Body" body=""
	I1212 20:32:11.458157  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:11.458482  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:11.958236  398903 type.go:168] "Request Body" body=""
	I1212 20:32:11.958308  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:11.958586  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:12.458420  398903 type.go:168] "Request Body" body=""
	I1212 20:32:12.458497  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:12.458856  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:12.957555  398903 type.go:168] "Request Body" body=""
	I1212 20:32:12.957638  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:12.957981  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:13.460759  398903 type.go:168] "Request Body" body=""
	I1212 20:32:13.460830  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:13.461068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:13.461109  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:13.957766  398903 type.go:168] "Request Body" body=""
	I1212 20:32:13.957849  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:13.958216  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:14.457793  398903 type.go:168] "Request Body" body=""
	I1212 20:32:14.457868  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:14.458208  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:14.957890  398903 type.go:168] "Request Body" body=""
	I1212 20:32:14.957960  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:14.958230  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:15.457650  398903 type.go:168] "Request Body" body=""
	I1212 20:32:15.457735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:15.458122  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:15.957907  398903 type.go:168] "Request Body" body=""
	I1212 20:32:15.957985  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:15.958378  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:15.958434  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:16.458157  398903 type.go:168] "Request Body" body=""
	I1212 20:32:16.458233  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:16.458504  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:16.958300  398903 type.go:168] "Request Body" body=""
	I1212 20:32:16.958386  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:16.958758  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:17.458562  398903 type.go:168] "Request Body" body=""
	I1212 20:32:17.458639  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:17.458986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:17.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:32:17.957715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:17.958109  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:18.457646  398903 type.go:168] "Request Body" body=""
	I1212 20:32:18.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:18.458061  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:18.458116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:18.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:32:18.957731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:18.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:19.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:32:19.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:19.457938  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:19.957698  398903 type.go:168] "Request Body" body=""
	I1212 20:32:19.957777  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:19.958136  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:20.457625  398903 type.go:168] "Request Body" body=""
	I1212 20:32:20.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:20.458047  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:20.957741  398903 type.go:168] "Request Body" body=""
	I1212 20:32:20.957811  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:20.958082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:20.958125  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:21.458048  398903 type.go:168] "Request Body" body=""
	I1212 20:32:21.458126  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:21.458473  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:21.958279  398903 type.go:168] "Request Body" body=""
	I1212 20:32:21.958354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:21.958679  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:22.458411  398903 type.go:168] "Request Body" body=""
	I1212 20:32:22.458484  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:22.458765  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:22.958550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:22.958632  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:22.958958  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:22.959017  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:23.457629  398903 type.go:168] "Request Body" body=""
	I1212 20:32:23.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:23.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:23.957725  398903 type.go:168] "Request Body" body=""
	I1212 20:32:23.957800  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:23.958134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:24.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:32:24.457721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:24.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:24.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:32:24.957716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:24.958081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:25.457630  398903 type.go:168] "Request Body" body=""
	I1212 20:32:25.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:25.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:25.458090  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:25.958111  398903 type.go:168] "Request Body" body=""
	I1212 20:32:25.958187  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:25.958536  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:26.458306  398903 type.go:168] "Request Body" body=""
	I1212 20:32:26.458383  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:26.458747  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:26.958505  398903 type.go:168] "Request Body" body=""
	I1212 20:32:26.958576  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:26.958841  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:27.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:32:27.457680  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:27.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:27.458127  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:27.957787  398903 type.go:168] "Request Body" body=""
	I1212 20:32:27.957874  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:27.958233  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:28.457931  398903 type.go:168] "Request Body" body=""
	I1212 20:32:28.457998  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:28.458263  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:28.957554  398903 type.go:168] "Request Body" body=""
	I1212 20:32:28.957643  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:28.957977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:29.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:32:29.457711  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:29.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:29.957530  398903 type.go:168] "Request Body" body=""
	I1212 20:32:29.957610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:29.957906  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:29.957953  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:30.457609  398903 type.go:168] "Request Body" body=""
	I1212 20:32:30.457697  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:30.458040  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:30.957778  398903 type.go:168] "Request Body" body=""
	I1212 20:32:30.957864  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:30.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:31.458073  398903 type.go:168] "Request Body" body=""
	I1212 20:32:31.458140  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:31.458418  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:31.958203  398903 type.go:168] "Request Body" body=""
	I1212 20:32:31.958278  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:31.958617  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:31.958671  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:32.458448  398903 type.go:168] "Request Body" body=""
	I1212 20:32:32.458537  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:32.458868  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:32.957533  398903 type.go:168] "Request Body" body=""
	I1212 20:32:32.957609  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:32.957933  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:33.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:32:33.457708  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:33.458036  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:33.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:32:33.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:33.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:34.457588  398903 type.go:168] "Request Body" body=""
	I1212 20:32:34.457663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:34.457997  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:34.458054  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:34.957694  398903 type.go:168] "Request Body" body=""
	I1212 20:32:34.957770  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:34.958112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:35.457630  398903 type.go:168] "Request Body" body=""
	I1212 20:32:35.457708  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:35.458060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:35.957756  398903 type.go:168] "Request Body" body=""
	I1212 20:32:35.957825  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:35.958163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:36.458166  398903 type.go:168] "Request Body" body=""
	I1212 20:32:36.458243  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:36.458598  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:36.458654  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:36.958444  398903 type.go:168] "Request Body" body=""
	I1212 20:32:36.958533  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:36.958889  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:37.458453  398903 type.go:168] "Request Body" body=""
	I1212 20:32:37.458552  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:37.458884  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:37.957603  398903 type.go:168] "Request Body" body=""
	I1212 20:32:37.957686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:37.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:38.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:32:38.457739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:38.458072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:38.957536  398903 type.go:168] "Request Body" body=""
	I1212 20:32:38.957609  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:38.957905  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:38.957951  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:39.457634  398903 type.go:168] "Request Body" body=""
	I1212 20:32:39.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:39.458054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:39.957793  398903 type.go:168] "Request Body" body=""
	I1212 20:32:39.957878  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:39.958188  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:40.458558  398903 type.go:168] "Request Body" body=""
	I1212 20:32:40.458626  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:40.458896  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:40.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:32:40.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:40.958066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:40.958120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:41.457917  398903 type.go:168] "Request Body" body=""
	I1212 20:32:41.458003  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:41.458345  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:41.958008  398903 type.go:168] "Request Body" body=""
	I1212 20:32:41.958090  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:41.958391  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:42.458186  398903 type.go:168] "Request Body" body=""
	I1212 20:32:42.458268  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:42.458645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:42.958471  398903 type.go:168] "Request Body" body=""
	I1212 20:32:42.958551  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:42.958913  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:42.958969  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:43.457567  398903 type.go:168] "Request Body" body=""
	I1212 20:32:43.457639  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:43.457970  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:43.957654  398903 type.go:168] "Request Body" body=""
	I1212 20:32:43.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:43.958127  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:44.457848  398903 type.go:168] "Request Body" body=""
	I1212 20:32:44.457925  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:44.458300  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:44.957921  398903 type.go:168] "Request Body" body=""
	I1212 20:32:44.957989  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:44.958269  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:45.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:32:45.457750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:45.458108  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:45.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:45.957919  398903 type.go:168] "Request Body" body=""
	I1212 20:32:45.958010  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:45.958428  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:46.458249  398903 type.go:168] "Request Body" body=""
	I1212 20:32:46.458344  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:46.458620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:46.958392  398903 type.go:168] "Request Body" body=""
	I1212 20:32:46.958479  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:46.958844  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:47.457550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:47.457637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:47.457976  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:47.957652  398903 type.go:168] "Request Body" body=""
	I1212 20:32:47.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:47.957996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:47.958035  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:48.457660  398903 type.go:168] "Request Body" body=""
	I1212 20:32:48.457733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:48.458085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:48.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:32:48.957717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:48.958068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:49.457759  398903 type.go:168] "Request Body" body=""
	I1212 20:32:49.457832  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:49.458095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:49.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:32:49.957718  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:49.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:49.958116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:50.457791  398903 type.go:168] "Request Body" body=""
	I1212 20:32:50.457875  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:50.458204  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:50.957582  398903 type.go:168] "Request Body" body=""
	I1212 20:32:50.957654  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:50.957961  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:51.457942  398903 type.go:168] "Request Body" body=""
	I1212 20:32:51.458024  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:51.458587  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:51.958377  398903 type.go:168] "Request Body" body=""
	I1212 20:32:51.958463  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:51.958946  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:51.959008  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:52.457596  398903 type.go:168] "Request Body" body=""
	I1212 20:32:52.457667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:52.457937  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:52.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:32:52.957732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:52.958048  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:53.457745  398903 type.go:168] "Request Body" body=""
	I1212 20:32:53.457818  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:53.458155  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:53.958157  398903 type.go:168] "Request Body" body=""
	I1212 20:32:53.958227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:53.958497  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:54.458351  398903 type.go:168] "Request Body" body=""
	I1212 20:32:54.458435  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:54.458785  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:54.458844  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:54.957837  398903 type.go:168] "Request Body" body=""
	I1212 20:32:54.957927  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:54.958377  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:55.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:32:55.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:55.458049  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:55.958082  398903 type.go:168] "Request Body" body=""
	I1212 20:32:55.958157  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:55.958506  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:56.458323  398903 type.go:168] "Request Body" body=""
	I1212 20:32:56.458423  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:56.458789  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:56.958570  398903 type.go:168] "Request Body" body=""
	I1212 20:32:56.958641  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:56.958907  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:56.958949  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:57.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:32:57.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:57.458009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:57.957647  398903 type.go:168] "Request Body" body=""
	I1212 20:32:57.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:57.958085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:58.457771  398903 type.go:168] "Request Body" body=""
	I1212 20:32:58.457845  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:58.458182  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:58.957910  398903 type.go:168] "Request Body" body=""
	I1212 20:32:58.957990  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:58.958333  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:59.458167  398903 type.go:168] "Request Body" body=""
	I1212 20:32:59.458246  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:59.458600  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:59.458673  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:59.958419  398903 type.go:168] "Request Body" body=""
	I1212 20:32:59.958492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:59.958763  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:00.458626  398903 type.go:168] "Request Body" body=""
	I1212 20:33:00.458718  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:00.459178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:00.957917  398903 type.go:168] "Request Body" body=""
	I1212 20:33:00.957999  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:00.958339  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:01.458146  398903 type.go:168] "Request Body" body=""
	I1212 20:33:01.458227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:01.458496  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:01.958247  398903 type.go:168] "Request Body" body=""
	I1212 20:33:01.958324  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:01.958679  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:01.958746  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:02.458517  398903 type.go:168] "Request Body" body=""
	I1212 20:33:02.458595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:02.458922  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:02.957588  398903 type.go:168] "Request Body" body=""
	I1212 20:33:02.957664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:02.957961  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:03.457658  398903 type.go:168] "Request Body" body=""
	I1212 20:33:03.457735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:03.458091  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:03.957689  398903 type.go:168] "Request Body" body=""
	I1212 20:33:03.957766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:03.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:04.457590  398903 type.go:168] "Request Body" body=""
	I1212 20:33:04.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:04.458004  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:04.458057  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:04.957694  398903 type.go:168] "Request Body" body=""
	I1212 20:33:04.957771  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:04.958097  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:05.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:33:05.457724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:05.458077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:05.957795  398903 type.go:168] "Request Body" body=""
	I1212 20:33:05.957876  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:05.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:06.458126  398903 type.go:168] "Request Body" body=""
	I1212 20:33:06.458201  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:06.458609  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:06.458666  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:06.958431  398903 type.go:168] "Request Body" body=""
	I1212 20:33:06.958510  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:06.958861  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:07.458432  398903 type.go:168] "Request Body" body=""
	I1212 20:33:07.458505  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:07.458769  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:07.958549  398903 type.go:168] "Request Body" body=""
	I1212 20:33:07.958631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:07.958975  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:08.457668  398903 type.go:168] "Request Body" body=""
	I1212 20:33:08.457744  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:08.458100  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:08.957714  398903 type.go:168] "Request Body" body=""
	I1212 20:33:08.957786  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:08.958051  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:08.958096  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:09.457741  398903 type.go:168] "Request Body" body=""
	I1212 20:33:09.457817  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:09.458145  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:09.957623  398903 type.go:168] "Request Body" body=""
	I1212 20:33:09.957707  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:09.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:10.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:33:10.457729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:10.458029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:10.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:33:10.957729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:10.958065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:10.958120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:11.457959  398903 type.go:168] "Request Body" body=""
	I1212 20:33:11.458036  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:11.458394  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:11.958170  398903 type.go:168] "Request Body" body=""
	I1212 20:33:11.958258  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:11.958549  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:12.458358  398903 type.go:168] "Request Body" body=""
	I1212 20:33:12.458435  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:12.458775  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:12.957520  398903 type.go:168] "Request Body" body=""
	I1212 20:33:12.957604  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:12.957972  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:13.458501  398903 type.go:168] "Request Body" body=""
	I1212 20:33:13.458572  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:13.458848  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:13.458891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:13.957574  398903 type.go:168] "Request Body" body=""
	I1212 20:33:13.957653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:13.957991  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:14.457577  398903 type.go:168] "Request Body" body=""
	I1212 20:33:14.457656  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:14.457996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:14.957521  398903 type.go:168] "Request Body" body=""
	I1212 20:33:14.957595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:14.957928  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:15.457515  398903 type.go:168] "Request Body" body=""
	I1212 20:33:15.457593  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:15.457969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:15.957742  398903 type.go:168] "Request Body" body=""
	I1212 20:33:15.957819  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:15.958159  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:15.958212  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:16.457912  398903 type.go:168] "Request Body" body=""
	I1212 20:33:16.457988  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:16.458249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:16.957938  398903 type.go:168] "Request Body" body=""
	I1212 20:33:16.958013  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:16.958371  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:17.457903  398903 type.go:168] "Request Body" body=""
	I1212 20:33:17.457988  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:17.458356  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:17.957551  398903 type.go:168] "Request Body" body=""
	I1212 20:33:17.957628  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:17.957895  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:18.457585  398903 type.go:168] "Request Body" body=""
	I1212 20:33:18.457663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:18.458004  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:18.458060  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:18.957651  398903 type.go:168] "Request Body" body=""
	I1212 20:33:18.957727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:18.958085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:19.457757  398903 type.go:168] "Request Body" body=""
	I1212 20:33:19.457827  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:19.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:19.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:33:19.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:19.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:20.457628  398903 type.go:168] "Request Body" body=""
	I1212 20:33:20.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:20.458050  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:20.458103  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:20.957580  398903 type.go:168] "Request Body" body=""
	I1212 20:33:20.957651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:20.957981  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:21.457718  398903 type.go:168] "Request Body" body=""
	I1212 20:33:21.457793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:21.458138  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:21.957850  398903 type.go:168] "Request Body" body=""
	I1212 20:33:21.957933  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:21.958282  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:22.457957  398903 type.go:168] "Request Body" body=""
	I1212 20:33:22.458031  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:22.458362  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:22.458419  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:22.958162  398903 type.go:168] "Request Body" body=""
	I1212 20:33:22.958237  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:22.958574  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:23.458385  398903 type.go:168] "Request Body" body=""
	I1212 20:33:23.458462  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:23.458816  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:23.958452  398903 type.go:168] "Request Body" body=""
	I1212 20:33:23.958525  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:23.958802  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:24.458538  398903 type.go:168] "Request Body" body=""
	I1212 20:33:24.458623  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:24.458972  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:24.459028  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:24.957567  398903 type.go:168] "Request Body" body=""
	I1212 20:33:24.957643  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:24.957987  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:25.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:33:25.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:25.458002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:25.957886  398903 type.go:168] "Request Body" body=""
	I1212 20:33:25.957967  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:25.958322  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:26.458268  398903 type.go:168] "Request Body" body=""
	I1212 20:33:26.458344  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:26.458704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:26.958389  398903 type.go:168] "Request Body" body=""
	I1212 20:33:26.958460  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:26.958721  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:26.958761  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:27.458544  398903 type.go:168] "Request Body" body=""
	I1212 20:33:27.458621  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:27.458969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:27.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:33:27.957682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:27.958006  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:28.457568  398903 type.go:168] "Request Body" body=""
	I1212 20:33:28.457642  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:28.457915  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:28.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:33:28.957711  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:28.958067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:29.457799  398903 type.go:168] "Request Body" body=""
	I1212 20:33:29.457877  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:29.458218  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:29.458292  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:29.957566  398903 type.go:168] "Request Body" body=""
	I1212 20:33:29.957640  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:29.957986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:30.457705  398903 type.go:168] "Request Body" body=""
	I1212 20:33:30.457788  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:30.458134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:30.957840  398903 type.go:168] "Request Body" body=""
	I1212 20:33:30.957922  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:30.958258  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:31.458070  398903 type.go:168] "Request Body" body=""
	I1212 20:33:31.458149  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:31.458407  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:31.458480  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:31.958244  398903 type.go:168] "Request Body" body=""
	I1212 20:33:31.958322  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:31.958670  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:32.458475  398903 type.go:168] "Request Body" body=""
	I1212 20:33:32.458555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:32.458902  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:32.958470  398903 type.go:168] "Request Body" body=""
	I1212 20:33:32.958550  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:32.958844  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:33.457551  398903 type.go:168] "Request Body" body=""
	I1212 20:33:33.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:33.457948  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:33.957664  398903 type.go:168] "Request Body" body=""
	I1212 20:33:33.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:33.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:33.958117  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:34.457524  398903 type.go:168] "Request Body" body=""
	I1212 20:33:34.457599  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:34.457902  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:34.957627  398903 type.go:168] "Request Body" body=""
	I1212 20:33:34.957704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:34.958079  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:35.457784  398903 type.go:168] "Request Body" body=""
	I1212 20:33:35.457914  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:35.458250  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:35.958142  398903 type.go:168] "Request Body" body=""
	I1212 20:33:35.958225  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:35.958508  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:35.958562  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:36.458394  398903 type.go:168] "Request Body" body=""
	I1212 20:33:36.458478  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:36.458822  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:36.957589  398903 type.go:168] "Request Body" body=""
	I1212 20:33:36.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:36.958009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:37.457586  398903 type.go:168] "Request Body" body=""
	I1212 20:33:37.457669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:37.458096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:37.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:33:37.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:37.958113  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:38.457820  398903 type.go:168] "Request Body" body=""
	I1212 20:33:38.457902  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:38.458236  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:38.458295  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:38.957610  398903 type.go:168] "Request Body" body=""
	I1212 20:33:38.957699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:38.958001  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:39.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:33:39.457722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:39.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:39.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:33:39.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:39.958083  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:40.457768  398903 type.go:168] "Request Body" body=""
	I1212 20:33:40.457840  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:40.458168  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:40.957672  398903 type.go:168] "Request Body" body=""
	I1212 20:33:40.957758  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:40.958165  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:40.958231  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:41.458222  398903 type.go:168] "Request Body" body=""
	I1212 20:33:41.458298  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:41.458630  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:41.958341  398903 type.go:168] "Request Body" body=""
	I1212 20:33:41.958427  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:41.958700  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:42.458517  398903 type.go:168] "Request Body" body=""
	I1212 20:33:42.458591  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:42.458943  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:42.957649  398903 type.go:168] "Request Body" body=""
	I1212 20:33:42.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:42.958066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:43.457746  398903 type.go:168] "Request Body" body=""
	I1212 20:33:43.457813  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:43.458089  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:43.458129  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:43.957791  398903 type.go:168] "Request Body" body=""
	I1212 20:33:43.957883  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:43.958248  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:44.457980  398903 type.go:168] "Request Body" body=""
	I1212 20:33:44.458055  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:44.458393  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:44.958151  398903 type.go:168] "Request Body" body=""
	I1212 20:33:44.958223  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:44.958490  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:45.458269  398903 type.go:168] "Request Body" body=""
	I1212 20:33:45.458343  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:45.458708  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:45.458764  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:45.958513  398903 type.go:168] "Request Body" body=""
	I1212 20:33:45.958590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:45.958931  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:46.457565  398903 type.go:168] "Request Body" body=""
	I1212 20:33:46.457633  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:46.457910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:46.957631  398903 type.go:168] "Request Body" body=""
	I1212 20:33:46.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:46.958128  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:47.457846  398903 type.go:168] "Request Body" body=""
	I1212 20:33:47.457922  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:47.458245  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:47.957545  398903 type.go:168] "Request Body" body=""
	I1212 20:33:47.957618  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:47.957914  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:47.957963  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:48.457643  398903 type.go:168] "Request Body" body=""
	I1212 20:33:48.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:48.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:48.957629  398903 type.go:168] "Request Body" body=""
	I1212 20:33:48.957712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:48.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:49.457729  398903 type.go:168] "Request Body" body=""
	I1212 20:33:49.457799  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:49.458103  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:49.957633  398903 type.go:168] "Request Body" body=""
	I1212 20:33:49.957725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:49.958056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:49.958114  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:50.457640  398903 type.go:168] "Request Body" body=""
	I1212 20:33:50.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:50.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:50.957791  398903 type.go:168] "Request Body" body=""
	I1212 20:33:50.957864  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:50.958188  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:51.458156  398903 type.go:168] "Request Body" body=""
	I1212 20:33:51.458244  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:51.458588  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:51.958381  398903 type.go:168] "Request Body" body=""
	I1212 20:33:51.958464  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:51.958840  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:51.958897  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:52.458422  398903 type.go:168] "Request Body" body=""
	I1212 20:33:52.458495  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:52.458781  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:52.958521  398903 type.go:168] "Request Body" body=""
	I1212 20:33:52.958596  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:52.958935  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:53.457563  398903 type.go:168] "Request Body" body=""
	I1212 20:33:53.457641  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:53.457994  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:53.957675  398903 type.go:168] "Request Body" body=""
	I1212 20:33:53.957749  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:53.958046  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:54.457737  398903 type.go:168] "Request Body" body=""
	I1212 20:33:54.457815  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:54.458164  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:54.458229  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:54.957758  398903 type.go:168] "Request Body" body=""
	I1212 20:33:54.957838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:54.958212  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:55.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:33:55.457673  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:55.458019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:55.958073  398903 type.go:168] "Request Body" body=""
	I1212 20:33:55.958151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:55.958481  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:56.458356  398903 type.go:168] "Request Body" body=""
	I1212 20:33:56.458518  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:56.458867  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:56.458919  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:56.958475  398903 type.go:168] "Request Body" body=""
	I1212 20:33:56.958546  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:56.958806  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:57.457573  398903 type.go:168] "Request Body" body=""
	I1212 20:33:57.457662  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:57.458019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:57.957708  398903 type.go:168] "Request Body" body=""
	I1212 20:33:57.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:57.958149  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:58.457519  398903 type.go:168] "Request Body" body=""
	I1212 20:33:58.457596  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:58.457910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:58.957618  398903 type.go:168] "Request Body" body=""
	I1212 20:33:58.957702  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:58.958029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:58.958086  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:59.457639  398903 type.go:168] "Request Body" body=""
	I1212 20:33:59.457717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:59.458079  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:59.957625  398903 type.go:168] "Request Body" body=""
	I1212 20:33:59.957695  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:59.958025  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:00.457684  398903 type.go:168] "Request Body" body=""
	I1212 20:34:00.457770  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:00.458220  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:00.957723  398903 type.go:168] "Request Body" body=""
	I1212 20:34:00.957815  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:00.958152  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:00.958209  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:01.458053  398903 type.go:168] "Request Body" body=""
	I1212 20:34:01.458124  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:01.458397  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:01.958241  398903 type.go:168] "Request Body" body=""
	I1212 20:34:01.958318  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:01.958645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:02.458431  398903 type.go:168] "Request Body" body=""
	I1212 20:34:02.458517  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:02.458903  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:02.958515  398903 type.go:168] "Request Body" body=""
	I1212 20:34:02.958593  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:02.958871  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:02.958913  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:03.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:34:03.457665  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:03.458014  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:03.957750  398903 type.go:168] "Request Body" body=""
	I1212 20:34:03.957834  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:03.958178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:04.457755  398903 type.go:168] "Request Body" body=""
	I1212 20:34:04.457832  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:04.458106  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:04.957792  398903 type.go:168] "Request Body" body=""
	I1212 20:34:04.957872  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:04.958222  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:05.457932  398903 type.go:168] "Request Body" body=""
	I1212 20:34:05.458011  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:05.458316  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:05.458363  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:05.958224  398903 type.go:168] "Request Body" body=""
	I1212 20:34:05.958347  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:05.958674  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:06.457554  398903 type.go:168] "Request Body" body=""
	I1212 20:34:06.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:06.457980  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:06.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:06.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:06.958087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:07.457764  398903 type.go:168] "Request Body" body=""
	I1212 20:34:07.457837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:07.458126  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:07.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:34:07.957717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:07.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:07.958131  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:08.457790  398903 type.go:168] "Request Body" body=""
	I1212 20:34:08.457867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:08.458190  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:08.957583  398903 type.go:168] "Request Body" body=""
	I1212 20:34:08.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:08.958018  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:09.457609  398903 type.go:168] "Request Body" body=""
	I1212 20:34:09.457690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:09.457986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:09.957661  398903 type.go:168] "Request Body" body=""
	I1212 20:34:09.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:09.958082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:10.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:34:10.457682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:10.458044  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:10.458120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:10.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:34:10.957716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:10.958069  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:11.457925  398903 type.go:168] "Request Body" body=""
	I1212 20:34:11.458005  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:11.458337  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:11.957904  398903 type.go:168] "Request Body" body=""
	I1212 20:34:11.957987  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:11.958273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:12.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:34:12.457716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:12.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:12.957766  398903 type.go:168] "Request Body" body=""
	I1212 20:34:12.957844  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:12.958153  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:12.958206  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:13.457572  398903 type.go:168] "Request Body" body=""
	I1212 20:34:13.457652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:13.457977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:13.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:34:13.957752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:13.958163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:14.457645  398903 type.go:168] "Request Body" body=""
	I1212 20:34:14.457721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:14.458033  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:14.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:34:14.957669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:14.957980  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:15.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:34:15.457800  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:15.458149  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:15.458206  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:15.957907  398903 type.go:168] "Request Body" body=""
	I1212 20:34:15.958010  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:15.958356  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:16.458302  398903 type.go:168] "Request Body" body=""
	I1212 20:34:16.458374  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:16.458653  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:16.958451  398903 type.go:168] "Request Body" body=""
	I1212 20:34:16.958529  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:16.958870  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:17.457647  398903 type.go:168] "Request Body" body=""
	I1212 20:34:17.457741  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:17.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:17.957571  398903 type.go:168] "Request Body" body=""
	I1212 20:34:17.957648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:17.958005  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:17.958058  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:18.457731  398903 type.go:168] "Request Body" body=""
	I1212 20:34:18.457820  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:18.458202  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:18.957933  398903 type.go:168] "Request Body" body=""
	I1212 20:34:18.958011  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:18.958346  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:19.457582  398903 type.go:168] "Request Body" body=""
	I1212 20:34:19.457658  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:19.457973  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:19.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:34:19.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:19.958037  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:19.958084  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:20.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:34:20.457726  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:20.458052  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:20.957756  398903 type.go:168] "Request Body" body=""
	I1212 20:34:20.957830  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:20.958096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:21.458059  398903 type.go:168] "Request Body" body=""
	I1212 20:34:21.458132  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:21.458454  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:21.958169  398903 type.go:168] "Request Body" body=""
	I1212 20:34:21.958248  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:21.958614  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:21.958670  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:22.458387  398903 type.go:168] "Request Body" body=""
	I1212 20:34:22.458456  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:22.458712  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:22.958495  398903 type.go:168] "Request Body" body=""
	I1212 20:34:22.958574  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:22.958894  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:23.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:34:23.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:23.458042  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:23.957581  398903 type.go:168] "Request Body" body=""
	I1212 20:34:23.957653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:23.957931  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:24.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:34:24.457766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:24.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:24.458117  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:24.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:24.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:24.958072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:25.457596  398903 type.go:168] "Request Body" body=""
	I1212 20:34:25.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:25.458023  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:25.958032  398903 type.go:168] "Request Body" body=""
	I1212 20:34:25.958118  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:25.958454  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:26.458388  398903 type.go:168] "Request Body" body=""
	I1212 20:34:26.458463  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:26.458824  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:26.458879  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:26.958476  398903 type.go:168] "Request Body" body=""
	I1212 20:34:26.958547  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:26.958814  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:27.458579  398903 type.go:168] "Request Body" body=""
	I1212 20:34:27.458656  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:27.458987  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:27.957727  398903 type.go:168] "Request Body" body=""
	I1212 20:34:27.957802  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:27.958162  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:28.458439  398903 type.go:168] "Request Body" body=""
	I1212 20:34:28.458510  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:28.458774  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:28.958512  398903 type.go:168] "Request Body" body=""
	I1212 20:34:28.958589  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:28.958911  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:28.958974  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:29.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:34:29.457686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:29.458020  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:29.957734  398903 type.go:168] "Request Body" body=""
	I1212 20:34:29.957825  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:29.958161  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:30.457641  398903 type.go:168] "Request Body" body=""
	I1212 20:34:30.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:30.458083  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:30.957610  398903 type.go:168] "Request Body" body=""
	I1212 20:34:30.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:30.958024  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:31.457903  398903 type.go:168] "Request Body" body=""
	I1212 20:34:31.458012  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:31.458336  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:31.458388  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:31.958144  398903 type.go:168] "Request Body" body=""
	I1212 20:34:31.958227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:31.958581  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:32.458466  398903 type.go:168] "Request Body" body=""
	I1212 20:34:32.458569  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:32.458930  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:32.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:34:32.957651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:32.957985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:33.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:34:33.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:33.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:33.957814  398903 type.go:168] "Request Body" body=""
	I1212 20:34:33.957889  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:33.958221  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:33.958279  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:34.457576  398903 type.go:168] "Request Body" body=""
	I1212 20:34:34.457651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:34.457968  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:34.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:34:34.957724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:34.958077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:35.457792  398903 type.go:168] "Request Body" body=""
	I1212 20:34:35.457876  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:35.458181  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:35.958034  398903 type.go:168] "Request Body" body=""
	I1212 20:34:35.958104  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:35.958369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:35.958411  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:36.458355  398903 type.go:168] "Request Body" body=""
	I1212 20:34:36.458432  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:36.458815  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:36.957543  398903 type.go:168] "Request Body" body=""
	I1212 20:34:36.957626  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:36.957947  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:37.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:34:37.457678  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:37.457995  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:37.957635  398903 type.go:168] "Request Body" body=""
	I1212 20:34:37.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:37.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:38.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:34:38.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:38.458116  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:38.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:38.957684  398903 type.go:168] "Request Body" body=""
	I1212 20:34:38.957762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:38.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:39.457740  398903 type.go:168] "Request Body" body=""
	I1212 20:34:39.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:39.458189  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:39.957892  398903 type.go:168] "Request Body" body=""
	I1212 20:34:39.957975  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:39.958305  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:40.457581  398903 type.go:168] "Request Body" body=""
	I1212 20:34:40.457659  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:40.457974  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:40.957654  398903 type.go:168] "Request Body" body=""
	I1212 20:34:40.957727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:40.958080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:40.958134  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:41.457945  398903 type.go:168] "Request Body" body=""
	I1212 20:34:41.458029  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:41.458375  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:41.958149  398903 type.go:168] "Request Body" body=""
	I1212 20:34:41.958218  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:41.958489  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:42.458344  398903 type.go:168] "Request Body" body=""
	I1212 20:34:42.458423  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:42.458797  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:42.957548  398903 type.go:168] "Request Body" body=""
	I1212 20:34:42.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:42.958002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:43.457680  398903 type.go:168] "Request Body" body=""
	I1212 20:34:43.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:43.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:43.458139  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:43.957634  398903 type.go:168] "Request Body" body=""
	I1212 20:34:43.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:43.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:44.457784  398903 type.go:168] "Request Body" body=""
	I1212 20:34:44.457863  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:44.458214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:44.957493  398903 type.go:168] "Request Body" body=""
	I1212 20:34:44.957567  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:44.957832  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:45.457549  398903 type.go:168] "Request Body" body=""
	I1212 20:34:45.457634  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:45.457985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:45.957790  398903 type.go:168] "Request Body" body=""
	I1212 20:34:45.957867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:45.958220  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:45.958281  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:46.458047  398903 type.go:168] "Request Body" body=""
	I1212 20:34:46.458139  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:46.458408  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:46.958199  398903 type.go:168] "Request Body" body=""
	I1212 20:34:46.958280  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:46.958672  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:47.458502  398903 type.go:168] "Request Body" body=""
	I1212 20:34:47.458578  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:47.458923  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:47.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:34:47.957667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:47.958000  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:48.457673  398903 type.go:168] "Request Body" body=""
	I1212 20:34:48.457766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:48.458114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:48.458163  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:48.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:34:48.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:48.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:49.457750  398903 type.go:168] "Request Body" body=""
	I1212 20:34:49.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:49.458132  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:49.957625  398903 type.go:168] "Request Body" body=""
	I1212 20:34:49.957700  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:49.958065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:50.457775  398903 type.go:168] "Request Body" body=""
	I1212 20:34:50.457853  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:50.458187  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:50.458247  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:50.957570  398903 type.go:168] "Request Body" body=""
	I1212 20:34:50.957642  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:50.957959  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:51.457904  398903 type.go:168] "Request Body" body=""
	I1212 20:34:51.458001  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:51.458321  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:51.957626  398903 type.go:168] "Request Body" body=""
	I1212 20:34:51.957709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:51.958019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:52.457677  398903 type.go:168] "Request Body" body=""
	I1212 20:34:52.457750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:52.458071  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:52.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:52.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:52.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:52.958126  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:53.457793  398903 type.go:168] "Request Body" body=""
	I1212 20:34:53.457868  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:53.458211  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:53.957606  398903 type.go:168] "Request Body" body=""
	I1212 20:34:53.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:53.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:54.457738  398903 type.go:168] "Request Body" body=""
	I1212 20:34:54.457816  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:54.458178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:54.957898  398903 type.go:168] "Request Body" body=""
	I1212 20:34:54.957979  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:54.958335  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:54.958392  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:55.457874  398903 type.go:168] "Request Body" body=""
	I1212 20:34:55.457957  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:55.461901  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:34:55.957753  398903 type.go:168] "Request Body" body=""
	I1212 20:34:55.957835  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:55.958180  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:56.458205  398903 type.go:168] "Request Body" body=""
	I1212 20:34:56.458289  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:56.458646  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:56.958289  398903 type.go:168] "Request Body" body=""
	I1212 20:34:56.958348  398903 node_ready.go:38] duration metric: took 6m0.000942014s for node "functional-261311" to be "Ready" ...
	I1212 20:34:56.961249  398903 out.go:203] 
	W1212 20:34:56.963984  398903 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 20:34:56.964005  398903 out.go:285] * 
	W1212 20:34:56.966156  398903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:34:56.969023  398903 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 20:35:06 functional-261311 crio[5365]: time="2025-12-12T20:35:06.367352643Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=490740ea-6770-4c3b-8f9a-c249ed174965 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.450822832Z" level=info msg="Checking image status: minikube-local-cache-test:functional-261311" id=883d3025-5932-4ce6-ab51-85fab7fe190d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.451041287Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.451096443Z" level=info msg="Image minikube-local-cache-test:functional-261311 not found" id=883d3025-5932-4ce6-ab51-85fab7fe190d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.451186684Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-261311 found" id=883d3025-5932-4ce6-ab51-85fab7fe190d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.477868512Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-261311" id=749663a1-5e4e-4673-a1c1-e95b9bdcf9b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.478016182Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-261311 not found" id=749663a1-5e4e-4673-a1c1-e95b9bdcf9b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.478057478Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-261311 found" id=749663a1-5e4e-4673-a1c1-e95b9bdcf9b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.504304661Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-261311" id=be3c042e-7533-4a9a-8ba2-a3667ea82297 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.504481836Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-261311 not found" id=be3c042e-7533-4a9a-8ba2-a3667ea82297 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.504526735Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-261311 found" id=be3c042e-7533-4a9a-8ba2-a3667ea82297 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:08 functional-261311 crio[5365]: time="2025-12-12T20:35:08.500804411Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=32fd65ec-abb5-48a7-af6b-a0e0059f7b47 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:08 functional-261311 crio[5365]: time="2025-12-12T20:35:08.844040841Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=adbd8d4b-922c-4ca3-93fe-af4324dfaee0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:08 functional-261311 crio[5365]: time="2025-12-12T20:35:08.844230045Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=adbd8d4b-922c-4ca3-93fe-af4324dfaee0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:08 functional-261311 crio[5365]: time="2025-12-12T20:35:08.844289533Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=adbd8d4b-922c-4ca3-93fe-af4324dfaee0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.520024776Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=58aca57d-e555-4df6-85e3-2c89034783c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.520149594Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=58aca57d-e555-4df6-85e3-2c89034783c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.520186829Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=58aca57d-e555-4df6-85e3-2c89034783c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.545215858Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=88deabc7-58ff-44bb-ac34-d06fcc945c15 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.545375826Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=88deabc7-58ff-44bb-ac34-d06fcc945c15 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.545416245Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=88deabc7-58ff-44bb-ac34-d06fcc945c15 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.571808184Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c9b33570-5669-4cae-840b-38259988d85e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.571955123Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c9b33570-5669-4cae-840b-38259988d85e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.571992432Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=c9b33570-5669-4cae-840b-38259988d85e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:10 functional-261311 crio[5365]: time="2025-12-12T20:35:10.12697813Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b7030a75-60ee-4337-931a-e0927afb9fdf name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:35:11.702591    9395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:11.703164    9395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:11.705391    9395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:11.705995    9395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:11.707895    9395 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:35:11 up  3:17,  0 user,  load average: 0.91, 0.43, 0.95
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:35:09 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:35:09 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1153.
	Dec 12 20:35:09 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:09 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:09 functional-261311 kubelet[9262]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:09 functional-261311 kubelet[9262]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:09 functional-261311 kubelet[9262]: E1212 20:35:09.754470    9262 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:35:09 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:35:09 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:35:10 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1154.
	Dec 12 20:35:10 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:10 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:10 functional-261311 kubelet[9290]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:10 functional-261311 kubelet[9290]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:10 functional-261311 kubelet[9290]: E1212 20:35:10.520007    9290 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:35:10 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:35:10 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:35:11 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1155.
	Dec 12 20:35:11 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:11 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:11 functional-261311 kubelet[9311]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:11 functional-261311 kubelet[9311]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:11 functional-261311 kubelet[9311]: E1212 20:35:11.284636    9311 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:35:11 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:35:11 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
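The long run of "dial tcp 192.168.49.2:8441: connect: connection refused" entries in the log above is the client polling the node's Ready condition roughly every 500ms until the 6m0s deadline recorded by node_ready.go expires. As a minimal sketch of that retry pattern, assuming only the URL and node name taken from the log (it omits the TLS configuration and credentials a real Kubernetes client such as client-go would supply, and it is not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// waitForNode retries a GET against the node URL until it answers with 200 OK
// or the context deadline expires, mirroring the ~500ms retry cadence above.
func waitForNode(ctx context.Context, url string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node: %w", ctx.Err())
		case <-ticker.C:
			resp, err := http.Get(url) // "connection refused" while the apiserver is down
			if err != nil {
				continue // retry, as the node_ready loop does
			}
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
	}
}

func main() {
	// 6 minutes matches the "wait 6m0s for node" deadline in the failure above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	err := waitForNode(ctx, "https://192.168.49.2:8441/api/v1/nodes/functional-261311")
	fmt.Println(err)
}

Once the apiserver is actually listening, a real client would still have to inspect the node's status.conditions for the "Ready" type; in this sketch a 200 response merely stands in for that check.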
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (374.439741ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (2.64s)
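The kubelet journal in the log above shows the underlying cause of every connection-refused error in this test: each restart (counters 1153 through 1155) exits with "kubelet is configured to not run on a host using cgroup v1", so the apiserver on 8441 never comes back. A rough way to confirm which cgroup hierarchy a host is running, written here as a hypothetical standalone helper (not part of the test suite), keys off /sys/fs/cgroup/cgroup.controllers, which is present only on the cgroup v2 unified hierarchy:

package main

import (
	"fmt"
	"os"
)

func main() {
	// cgroup.controllers exists at the cgroup mount root only on cgroup v2.
	// Hybrid setups that mount v2 elsewhere are not covered by this rough check.
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		fmt.Println("cgroup v2 (unified hierarchy)")
	} else {
		fmt.Println("cgroup v1 (legacy hierarchy): the kubelet in this run refuses to start")
	}
}

On this 5.15.0-1084-aws host the kubelet's own validation message indicates it found cgroup v1, which is consistent with the repeated exit-code 1 restarts.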

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-261311 get pods
functional_test.go:756: (dbg) Non-zero exit: out/kubectl --context functional-261311 get pods: exit status 1 (113.895239ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out/kubectl --context functional-261311 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
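The docker inspect output above shows the container itself is still running, with the apiserver port 8441/tcp published on 127.0.0.1:33165 and the node reachable at 192.168.49.2 on the functional-261311 network. A small probe, assuming only those two addresses from the output (this is not the check the harness itself performs), illustrates what a status check sees while the kubelet is crash-looping: nothing is listening behind either endpoint, so connections are typically refused or reset immediately.

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Addresses taken from the docker inspect port bindings and network settings above.
	for _, addr := range []string{"127.0.0.1:33165", "192.168.49.2:8441"} {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", addr, err)
			continue
		}
		conn.Close()
		fmt.Printf("%s: reachable\n", addr)
	}
}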
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (304.728718ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p functional-261311 logs -n 25: (1.070482577s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-205528 image ls --format json --alsologtostderr                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr                                            │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format table --alsologtostderr                                                                                       │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ delete         │ -p functional-205528                                                                                                                              │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ start          │ -p functional-261311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ start          │ -p functional-261311 --alsologtostderr -v=8                                                                                                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:28 UTC │                     │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:latest                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add minikube-local-cache-test:functional-261311                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache delete minikube-local-cache-test:functional-261311                                                                        │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl images                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	│ cache          │ functional-261311 cache reload                                                                                                                    │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ kubectl        │ functional-261311 kubectl -- --context functional-261311 get pods                                                                                 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
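	
	The cache add / reload sequence recorded in the table above exercises restoring an image that was evicted from the node's CRI-O store out of minikube's local cache. A minimal sketch of that flow, assuming the same profile name and image tag as above (illustrative only, not part of the recorded log):
	
	  minikube -p functional-261311 cache add registry.k8s.io/pause:latest                     # copy the image into minikube's cache and load it on the node
	  minikube -p functional-261311 ssh -- sudo crictl rmi registry.k8s.io/pause:latest        # remove it from the CRI-O image store
	  minikube -p functional-261311 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # expected to fail while the image is absent
	  minikube -p functional-261311 cache reload                                               # push cached images back onto the node
	  minikube -p functional-261311 ssh -- sudo crictl inspecti registry.k8s.io/pause:latest   # should succeed again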
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:28:51
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:28:51.200639  398903 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:28:51.200813  398903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:28:51.200825  398903 out.go:374] Setting ErrFile to fd 2...
	I1212 20:28:51.200844  398903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:28:51.201121  398903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:28:51.201526  398903 out.go:368] Setting JSON to false
	I1212 20:28:51.202423  398903 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11484,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:28:51.202499  398903 start.go:143] virtualization:  
	I1212 20:28:51.205894  398903 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:28:51.209621  398903 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:28:51.209743  398903 notify.go:221] Checking for updates...
	I1212 20:28:51.215382  398903 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:28:51.218267  398903 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:51.221168  398903 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:28:51.224043  398903 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:28:51.227018  398903 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:28:51.230467  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:51.230581  398903 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:28:51.269738  398903 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:28:51.269857  398903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:28:51.341809  398903 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:28:51.330621143 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:28:51.341929  398903 docker.go:319] overlay module found
	I1212 20:28:51.347026  398903 out.go:179] * Using the docker driver based on existing profile
	I1212 20:28:51.349898  398903 start.go:309] selected driver: docker
	I1212 20:28:51.349928  398903 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:51.350015  398903 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:28:51.350136  398903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:28:51.408041  398903 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:28:51.398420734 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:28:51.408534  398903 cni.go:84] Creating CNI manager for ""
	I1212 20:28:51.408600  398903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:28:51.408656  398903 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:51.413511  398903 out.go:179] * Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	I1212 20:28:51.416491  398903 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:28:51.419403  398903 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:28:51.422306  398903 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:28:51.422357  398903 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:28:51.422368  398903 cache.go:65] Caching tarball of preloaded images
	I1212 20:28:51.422458  398903 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:28:51.422471  398903 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:28:51.422591  398903 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:28:51.422818  398903 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:28:51.441630  398903 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:28:51.441653  398903 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:28:51.441676  398903 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:28:51.441708  398903 start.go:360] acquireMachinesLock for functional-261311: {Name:mkbc4e6c743e47953e99b8ce65e244d33b483105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:28:51.441778  398903 start.go:364] duration metric: took 45.9µs to acquireMachinesLock for "functional-261311"
	I1212 20:28:51.441803  398903 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:28:51.441812  398903 fix.go:54] fixHost starting: 
	I1212 20:28:51.442073  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:51.469956  398903 fix.go:112] recreateIfNeeded on functional-261311: state=Running err=<nil>
	W1212 20:28:51.469989  398903 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:28:51.473238  398903 out.go:252] * Updating the running docker "functional-261311" container ...
	I1212 20:28:51.473304  398903 machine.go:94] provisionDockerMachine start ...
	I1212 20:28:51.473396  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.494630  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.494961  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.494976  398903 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:28:51.648147  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:28:51.648174  398903 ubuntu.go:182] provisioning hostname "functional-261311"
	I1212 20:28:51.648237  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.668778  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.669090  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.669106  398903 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-261311 && echo "functional-261311" | sudo tee /etc/hostname
	I1212 20:28:51.829776  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:28:51.829853  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:51.848648  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:51.848971  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:51.848987  398903 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-261311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-261311/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-261311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:28:52.002627  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:28:52.002659  398903 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:28:52.002689  398903 ubuntu.go:190] setting up certificates
	I1212 20:28:52.002713  398903 provision.go:84] configureAuth start
	I1212 20:28:52.002795  398903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:28:52.023958  398903 provision.go:143] copyHostCerts
	I1212 20:28:52.024006  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:28:52.024050  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:28:52.024064  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:28:52.024145  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:28:52.024243  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:28:52.024271  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:28:52.024280  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:28:52.024310  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:28:52.024357  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:28:52.024421  398903 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:28:52.024431  398903 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:28:52.024463  398903 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:28:52.024521  398903 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.functional-261311 san=[127.0.0.1 192.168.49.2 functional-261311 localhost minikube]
	I1212 20:28:52.567706  398903 provision.go:177] copyRemoteCerts
	I1212 20:28:52.567776  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:28:52.567821  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:52.585858  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:52.692768  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:28:52.692828  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:28:52.711466  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:28:52.711534  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:28:52.730742  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:28:52.730815  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:28:52.749109  398903 provision.go:87] duration metric: took 746.363484ms to configureAuth
	I1212 20:28:52.749138  398903 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:28:52.749373  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:52.749480  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:52.767233  398903 main.go:143] libmachine: Using SSH client type: native
	I1212 20:28:52.767548  398903 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:28:52.767570  398903 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:28:53.124031  398903 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:28:53.124063  398903 machine.go:97] duration metric: took 1.650735569s to provisionDockerMachine
	I1212 20:28:53.124076  398903 start.go:293] postStartSetup for "functional-261311" (driver="docker")
	I1212 20:28:53.124090  398903 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:28:53.124184  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:28:53.124249  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.144150  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.248393  398903 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:28:53.251578  398903 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1212 20:28:53.251600  398903 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1212 20:28:53.251605  398903 command_runner.go:130] > VERSION_ID="12"
	I1212 20:28:53.251610  398903 command_runner.go:130] > VERSION="12 (bookworm)"
	I1212 20:28:53.251614  398903 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1212 20:28:53.251618  398903 command_runner.go:130] > ID=debian
	I1212 20:28:53.251623  398903 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1212 20:28:53.251629  398903 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1212 20:28:53.251634  398903 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1212 20:28:53.251713  398903 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:28:53.251736  398903 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:28:53.251748  398903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:28:53.251809  398903 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:28:53.251889  398903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:28:53.251900  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:28:53.251976  398903 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> hosts in /etc/test/nested/copy/364853
	I1212 20:28:53.251984  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> /etc/test/nested/copy/364853/hosts
	I1212 20:28:53.252026  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364853
	I1212 20:28:53.259320  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:28:53.277130  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts --> /etc/test/nested/copy/364853/hosts (40 bytes)
	I1212 20:28:53.294238  398903 start.go:296] duration metric: took 170.145848ms for postStartSetup
	I1212 20:28:53.294390  398903 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:28:53.294470  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.312603  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.412930  398903 command_runner.go:130] > 11%
	I1212 20:28:53.413464  398903 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:28:53.417828  398903 command_runner.go:130] > 174G
	I1212 20:28:53.418334  398903 fix.go:56] duration metric: took 1.976518079s for fixHost
	I1212 20:28:53.418383  398903 start.go:83] releasing machines lock for "functional-261311", held for 1.976583573s
	I1212 20:28:53.418465  398903 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:28:53.435134  398903 ssh_runner.go:195] Run: cat /version.json
	I1212 20:28:53.435190  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.435445  398903 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:28:53.435511  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:53.452987  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.462005  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:53.555880  398903 command_runner.go:130] > {"iso_version": "v1.37.0-1765481609-22101", "kicbase_version": "v0.0.48-1765505794-22112", "minikube_version": "v1.37.0", "commit": "2e51b54b5cee5d454381ac23cfe3d8d395879671"}
	I1212 20:28:53.556060  398903 ssh_runner.go:195] Run: systemctl --version
	I1212 20:28:53.643428  398903 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1212 20:28:53.646219  398903 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1212 20:28:53.646272  398903 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1212 20:28:53.646362  398903 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:28:53.685489  398903 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1212 20:28:53.690919  398903 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1212 20:28:53.690960  398903 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:28:53.691016  398903 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:28:53.699790  398903 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:28:53.699851  398903 start.go:496] detecting cgroup driver to use...
	I1212 20:28:53.699883  398903 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:28:53.699937  398903 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:28:53.716256  398903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:28:53.731380  398903 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:28:53.731442  398903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:28:53.747947  398903 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:28:53.763704  398903 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:28:53.877723  398903 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:28:53.997385  398903 docker.go:234] disabling docker service ...
	I1212 20:28:53.997457  398903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:28:54.016313  398903 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:28:54.032112  398903 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:28:54.157667  398903 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:28:54.273189  398903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:28:54.288211  398903 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:28:54.301284  398903 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1212 20:28:54.302509  398903 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:28:54.302613  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.311343  398903 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:28:54.311460  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.320776  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.330058  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.340191  398903 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:28:54.348326  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.357164  398903 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.365464  398903 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.374528  398903 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:28:54.381778  398903 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1212 20:28:54.382795  398903 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:28:54.390360  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:54.529224  398903 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:28:54.703666  398903 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:28:54.703740  398903 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:28:54.707780  398903 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1212 20:28:54.707808  398903 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1212 20:28:54.707826  398903 command_runner.go:130] > Device: 0,72	Inode: 1640        Links: 1
	I1212 20:28:54.707834  398903 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:28:54.707840  398903 command_runner.go:130] > Access: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707850  398903 command_runner.go:130] > Modify: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707858  398903 command_runner.go:130] > Change: 2025-12-12 20:28:54.648002637 +0000
	I1212 20:28:54.707861  398903 command_runner.go:130] >  Birth: -
	I1212 20:28:54.707934  398903 start.go:564] Will wait 60s for crictl version
	I1212 20:28:54.708017  398903 ssh_runner.go:195] Run: which crictl
	I1212 20:28:54.711729  398903 command_runner.go:130] > /usr/local/bin/crictl
	I1212 20:28:54.711909  398903 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:28:54.737852  398903 command_runner.go:130] > Version:  0.1.0
	I1212 20:28:54.737888  398903 command_runner.go:130] > RuntimeName:  cri-o
	I1212 20:28:54.737895  398903 command_runner.go:130] > RuntimeVersion:  1.34.3
	I1212 20:28:54.737901  398903 command_runner.go:130] > RuntimeApiVersion:  v1
	I1212 20:28:54.740042  398903 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:28:54.740184  398903 ssh_runner.go:195] Run: crio --version
	I1212 20:28:54.769676  398903 command_runner.go:130] > crio version 1.34.3
	I1212 20:28:54.769713  398903 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 20:28:54.769720  398903 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 20:28:54.769725  398903 command_runner.go:130] >    GitTreeState:   dirty
	I1212 20:28:54.769750  398903 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 20:28:54.769764  398903 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 20:28:54.769768  398903 command_runner.go:130] >    Compiler:       gc
	I1212 20:28:54.769788  398903 command_runner.go:130] >    Platform:       linux/arm64
	I1212 20:28:54.769802  398903 command_runner.go:130] >    Linkmode:       static
	I1212 20:28:54.769806  398903 command_runner.go:130] >    BuildTags:
	I1212 20:28:54.769810  398903 command_runner.go:130] >      static
	I1212 20:28:54.769813  398903 command_runner.go:130] >      netgo
	I1212 20:28:54.769832  398903 command_runner.go:130] >      osusergo
	I1212 20:28:54.769838  398903 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 20:28:54.769842  398903 command_runner.go:130] >      seccomp
	I1212 20:28:54.769849  398903 command_runner.go:130] >      apparmor
	I1212 20:28:54.769852  398903 command_runner.go:130] >      selinux
	I1212 20:28:54.769859  398903 command_runner.go:130] >    LDFlags:          unknown
	I1212 20:28:54.769867  398903 command_runner.go:130] >    SeccompEnabled:   true
	I1212 20:28:54.769872  398903 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 20:28:54.769969  398903 ssh_runner.go:195] Run: crio --version
	I1212 20:28:54.796781  398903 command_runner.go:130] > crio version 1.34.3
	I1212 20:28:54.796850  398903 command_runner.go:130] >    GitCommit:      067a88aedf5d7c658a2acb81afe82d6c3a367a52
	I1212 20:28:54.796873  398903 command_runner.go:130] >    GitCommitDate:  2025-12-01T16:44:09Z
	I1212 20:28:54.796896  398903 command_runner.go:130] >    GitTreeState:   dirty
	I1212 20:28:54.796933  398903 command_runner.go:130] >    BuildDate:      1970-01-01T00:00:00Z
	I1212 20:28:54.796961  398903 command_runner.go:130] >    GoVersion:      go1.24.6
	I1212 20:28:54.796982  398903 command_runner.go:130] >    Compiler:       gc
	I1212 20:28:54.797005  398903 command_runner.go:130] >    Platform:       linux/arm64
	I1212 20:28:54.797036  398903 command_runner.go:130] >    Linkmode:       static
	I1212 20:28:54.797055  398903 command_runner.go:130] >    BuildTags:
	I1212 20:28:54.797071  398903 command_runner.go:130] >      static
	I1212 20:28:54.797089  398903 command_runner.go:130] >      netgo
	I1212 20:28:54.797108  398903 command_runner.go:130] >      osusergo
	I1212 20:28:54.797151  398903 command_runner.go:130] >      exclude_graphdriver_btrfs
	I1212 20:28:54.797177  398903 command_runner.go:130] >      seccomp
	I1212 20:28:54.797197  398903 command_runner.go:130] >      apparmor
	I1212 20:28:54.797231  398903 command_runner.go:130] >      selinux
	I1212 20:28:54.797262  398903 command_runner.go:130] >    LDFlags:          unknown
	I1212 20:28:54.797290  398903 command_runner.go:130] >    SeccompEnabled:   true
	I1212 20:28:54.797309  398903 command_runner.go:130] >    AppArmorEnabled:  false
	I1212 20:28:54.804038  398903 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:28:54.806949  398903 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:28:54.823441  398903 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:28:54.827623  398903 command_runner.go:130] > 192.168.49.1	host.minikube.internal
	I1212 20:28:54.827865  398903 kubeadm.go:884] updating cluster {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:28:54.827977  398903 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:28:54.828031  398903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:28:54.860175  398903 command_runner.go:130] > {
	I1212 20:28:54.860197  398903 command_runner.go:130] >   "images":  [
	I1212 20:28:54.860201  398903 command_runner.go:130] >     {
	I1212 20:28:54.860214  398903 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 20:28:54.860219  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860225  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 20:28:54.860229  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860233  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860242  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 20:28:54.860250  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 20:28:54.860254  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860258  398903 command_runner.go:130] >       "size":  "111333938",
	I1212 20:28:54.860263  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860270  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860274  398903 command_runner.go:130] >     },
	I1212 20:28:54.860277  398903 command_runner.go:130] >     {
	I1212 20:28:54.860285  398903 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 20:28:54.860289  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860295  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:28:54.860298  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860302  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860310  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 20:28:54.860333  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 20:28:54.860341  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860346  398903 command_runner.go:130] >       "size":  "29037500",
	I1212 20:28:54.860350  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860357  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860360  398903 command_runner.go:130] >     },
	I1212 20:28:54.860363  398903 command_runner.go:130] >     {
	I1212 20:28:54.860391  398903 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 20:28:54.860396  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860401  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 20:28:54.860404  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860408  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860417  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 20:28:54.860425  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 20:28:54.860428  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860434  398903 command_runner.go:130] >       "size":  "74491780",
	I1212 20:28:54.860439  398903 command_runner.go:130] >       "username":  "nonroot",
	I1212 20:28:54.860443  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860447  398903 command_runner.go:130] >     },
	I1212 20:28:54.860456  398903 command_runner.go:130] >     {
	I1212 20:28:54.860463  398903 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 20:28:54.860467  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860472  398903 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 20:28:54.860478  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860482  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860490  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 20:28:54.860497  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 20:28:54.860505  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860510  398903 command_runner.go:130] >       "size":  "60857170",
	I1212 20:28:54.860513  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860517  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860521  398903 command_runner.go:130] >       },
	I1212 20:28:54.860530  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860534  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860540  398903 command_runner.go:130] >     },
	I1212 20:28:54.860546  398903 command_runner.go:130] >     {
	I1212 20:28:54.860552  398903 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 20:28:54.860558  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860564  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 20:28:54.860567  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860577  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860594  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 20:28:54.860603  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 20:28:54.860610  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860614  398903 command_runner.go:130] >       "size":  "84949999",
	I1212 20:28:54.860618  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860622  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860625  398903 command_runner.go:130] >       },
	I1212 20:28:54.860630  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860636  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860639  398903 command_runner.go:130] >     },
	I1212 20:28:54.860643  398903 command_runner.go:130] >     {
	I1212 20:28:54.860652  398903 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 20:28:54.860659  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860665  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 20:28:54.860668  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860672  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860684  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 20:28:54.860695  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 20:28:54.860698  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860702  398903 command_runner.go:130] >       "size":  "72170325",
	I1212 20:28:54.860706  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860711  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860717  398903 command_runner.go:130] >       },
	I1212 20:28:54.860721  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860726  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860739  398903 command_runner.go:130] >     },
	I1212 20:28:54.860747  398903 command_runner.go:130] >     {
	I1212 20:28:54.860754  398903 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 20:28:54.860760  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860766  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 20:28:54.860769  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860773  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860781  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 20:28:54.860792  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 20:28:54.860796  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860801  398903 command_runner.go:130] >       "size":  "74106775",
	I1212 20:28:54.860807  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860811  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860817  398903 command_runner.go:130] >     },
	I1212 20:28:54.860820  398903 command_runner.go:130] >     {
	I1212 20:28:54.860827  398903 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 20:28:54.860831  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860839  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 20:28:54.860844  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860854  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860863  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 20:28:54.860876  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 20:28:54.860883  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860887  398903 command_runner.go:130] >       "size":  "49822549",
	I1212 20:28:54.860891  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860895  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.860905  398903 command_runner.go:130] >       },
	I1212 20:28:54.860908  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.860912  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.860922  398903 command_runner.go:130] >     },
	I1212 20:28:54.860925  398903 command_runner.go:130] >     {
	I1212 20:28:54.860932  398903 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 20:28:54.860938  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.860944  398903 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.860948  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860953  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.860961  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 20:28:54.860971  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 20:28:54.860975  398903 command_runner.go:130] >       ],
	I1212 20:28:54.860979  398903 command_runner.go:130] >       "size":  "519884",
	I1212 20:28:54.860984  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.860991  398903 command_runner.go:130] >         "value":  "65535"
	I1212 20:28:54.860994  398903 command_runner.go:130] >       },
	I1212 20:28:54.861000  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.861004  398903 command_runner.go:130] >       "pinned":  true
	I1212 20:28:54.861014  398903 command_runner.go:130] >     }
	I1212 20:28:54.861017  398903 command_runner.go:130] >   ]
	I1212 20:28:54.861020  398903 command_runner.go:130] > }
	I1212 20:28:54.861204  398903 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:28:54.861218  398903 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:28:54.861275  398903 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:28:54.883482  398903 command_runner.go:130] > {
	I1212 20:28:54.883501  398903 command_runner.go:130] >   "images":  [
	I1212 20:28:54.883506  398903 command_runner.go:130] >     {
	I1212 20:28:54.883514  398903 command_runner.go:130] >       "id":  "b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c",
	I1212 20:28:54.883520  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883526  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20250512-df8de77b"
	I1212 20:28:54.883529  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883533  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883547  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a",
	I1212 20:28:54.883556  398903 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"
	I1212 20:28:54.883560  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883564  398903 command_runner.go:130] >       "size":  "111333938",
	I1212 20:28:54.883568  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883574  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883577  398903 command_runner.go:130] >     },
	I1212 20:28:54.883580  398903 command_runner.go:130] >     {
	I1212 20:28:54.883587  398903 command_runner.go:130] >       "id":  "ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6",
	I1212 20:28:54.883591  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883597  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1212 20:28:54.883600  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883604  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883612  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2",
	I1212 20:28:54.883620  398903 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"
	I1212 20:28:54.883624  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883628  398903 command_runner.go:130] >       "size":  "29037500",
	I1212 20:28:54.883632  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883638  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883641  398903 command_runner.go:130] >     },
	I1212 20:28:54.883645  398903 command_runner.go:130] >     {
	I1212 20:28:54.883652  398903 command_runner.go:130] >       "id":  "e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf",
	I1212 20:28:54.883656  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883663  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.13.1"
	I1212 20:28:54.883666  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883670  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883679  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6",
	I1212 20:28:54.883687  398903 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"
	I1212 20:28:54.883690  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883695  398903 command_runner.go:130] >       "size":  "74491780",
	I1212 20:28:54.883699  398903 command_runner.go:130] >       "username":  "nonroot",
	I1212 20:28:54.883702  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883706  398903 command_runner.go:130] >     },
	I1212 20:28:54.883712  398903 command_runner.go:130] >     {
	I1212 20:28:54.883719  398903 command_runner.go:130] >       "id":  "2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42",
	I1212 20:28:54.883723  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883728  398903 command_runner.go:130] >         "registry.k8s.io/etcd:3.6.5-0"
	I1212 20:28:54.883733  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883737  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883745  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534",
	I1212 20:28:54.883752  398903 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"
	I1212 20:28:54.883756  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883759  398903 command_runner.go:130] >       "size":  "60857170",
	I1212 20:28:54.883763  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883767  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883770  398903 command_runner.go:130] >       },
	I1212 20:28:54.883778  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883783  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883786  398903 command_runner.go:130] >     },
	I1212 20:28:54.883788  398903 command_runner.go:130] >     {
	I1212 20:28:54.883795  398903 command_runner.go:130] >       "id":  "ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4",
	I1212 20:28:54.883798  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883804  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.35.0-beta.0"
	I1212 20:28:54.883807  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883811  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883819  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58",
	I1212 20:28:54.883827  398903 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"
	I1212 20:28:54.883830  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883834  398903 command_runner.go:130] >       "size":  "84949999",
	I1212 20:28:54.883838  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883842  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883845  398903 command_runner.go:130] >       },
	I1212 20:28:54.883854  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883858  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883861  398903 command_runner.go:130] >     },
	I1212 20:28:54.883864  398903 command_runner.go:130] >     {
	I1212 20:28:54.883874  398903 command_runner.go:130] >       "id":  "68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be",
	I1212 20:28:54.883878  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883884  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"
	I1212 20:28:54.883888  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883891  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883899  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d",
	I1212 20:28:54.883908  398903 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"
	I1212 20:28:54.883911  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883915  398903 command_runner.go:130] >       "size":  "72170325",
	I1212 20:28:54.883919  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.883923  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.883926  398903 command_runner.go:130] >       },
	I1212 20:28:54.883930  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883935  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883938  398903 command_runner.go:130] >     },
	I1212 20:28:54.883942  398903 command_runner.go:130] >     {
	I1212 20:28:54.883949  398903 command_runner.go:130] >       "id":  "404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904",
	I1212 20:28:54.883952  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.883958  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.35.0-beta.0"
	I1212 20:28:54.883961  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883965  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.883973  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478",
	I1212 20:28:54.883981  398903 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"
	I1212 20:28:54.883983  398903 command_runner.go:130] >       ],
	I1212 20:28:54.883988  398903 command_runner.go:130] >       "size":  "74106775",
	I1212 20:28:54.883991  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.883995  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.883999  398903 command_runner.go:130] >     },
	I1212 20:28:54.884002  398903 command_runner.go:130] >     {
	I1212 20:28:54.884008  398903 command_runner.go:130] >       "id":  "16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b",
	I1212 20:28:54.884012  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.884017  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.35.0-beta.0"
	I1212 20:28:54.884020  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884030  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.884038  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6",
	I1212 20:28:54.884055  398903 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"
	I1212 20:28:54.884061  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884064  398903 command_runner.go:130] >       "size":  "49822549",
	I1212 20:28:54.884068  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.884072  398903 command_runner.go:130] >         "value":  "0"
	I1212 20:28:54.884075  398903 command_runner.go:130] >       },
	I1212 20:28:54.884079  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.884082  398903 command_runner.go:130] >       "pinned":  false
	I1212 20:28:54.884085  398903 command_runner.go:130] >     },
	I1212 20:28:54.884088  398903 command_runner.go:130] >     {
	I1212 20:28:54.884095  398903 command_runner.go:130] >       "id":  "d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd",
	I1212 20:28:54.884099  398903 command_runner.go:130] >       "repoTags":  [
	I1212 20:28:54.884103  398903 command_runner.go:130] >         "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.884106  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884110  398903 command_runner.go:130] >       "repoDigests":  [
	I1212 20:28:54.884118  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c",
	I1212 20:28:54.884125  398903 command_runner.go:130] >         "registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"
	I1212 20:28:54.884129  398903 command_runner.go:130] >       ],
	I1212 20:28:54.884133  398903 command_runner.go:130] >       "size":  "519884",
	I1212 20:28:54.884137  398903 command_runner.go:130] >       "uid":  {
	I1212 20:28:54.884141  398903 command_runner.go:130] >         "value":  "65535"
	I1212 20:28:54.884145  398903 command_runner.go:130] >       },
	I1212 20:28:54.884149  398903 command_runner.go:130] >       "username":  "",
	I1212 20:28:54.884152  398903 command_runner.go:130] >       "pinned":  true
	I1212 20:28:54.884155  398903 command_runner.go:130] >     }
	I1212 20:28:54.884158  398903 command_runner.go:130] >   ]
	I1212 20:28:54.884161  398903 command_runner.go:130] > }
	I1212 20:28:54.885632  398903 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:28:54.885655  398903 cache_images.go:86] Images are preloaded, skipping loading
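	The two listings above are the raw JSON returned by `sudo crictl images --output json`; minikube decodes it and, because every image required for v1.35.0-beta.0 already appears in the CRI-O store, skips both preload extraction and image loading. A minimal Go sketch of decoding that shape (the struct and variable names are illustrative, not minikube's actual types; only the fields visible in the listing are modeled):

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// imageList models only the fields visible in the crictl output above.
	type imageList struct {
		Images []struct {
			ID          string   `json:"id"`
			RepoTags    []string `json:"repoTags"`
			RepoDigests []string `json:"repoDigests"`
			Size        string   `json:"size"`
			Username    string   `json:"username"`
			Pinned      bool     `json:"pinned"`
		} `json:"images"`
	}

	func main() {
		// Truncated sample in the same shape as the listing above.
		raw := []byte(`{"images":[{"id":"d7b100cd9a77","repoTags":["registry.k8s.io/pause:3.10.1"],"repoDigests":[],"size":"519884","username":"","pinned":true}]}`)

		var list imageList
		if err := json.Unmarshal(raw, &list); err != nil {
			panic(err)
		}
		for _, img := range list.Images {
			fmt.Println(img.RepoTags, "size:", img.Size, "pinned:", img.Pinned)
		}
	}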
	I1212 20:28:54.885663  398903 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 20:28:54.885778  398903 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-261311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
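	The kubelet drop-in logged above follows the standard systemd override pattern: the empty `ExecStart=` line clears the packaged kubelet command before the second `ExecStart=` supplies minikube's own flags (bootstrap kubeconfig, `--cgroups-per-qos=false`, the hostname override and node IP). On a running node the effective unit, including this drop-in, can be inspected with `minikube ssh -- systemctl cat kubelet`; the flags shown there should match the line logged above.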
	I1212 20:28:54.885868  398903 ssh_runner.go:195] Run: crio config
	I1212 20:28:54.934221  398903 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1212 20:28:54.934247  398903 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1212 20:28:54.934255  398903 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1212 20:28:54.934259  398903 command_runner.go:130] > #
	I1212 20:28:54.934288  398903 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1212 20:28:54.934303  398903 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1212 20:28:54.934310  398903 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1212 20:28:54.934320  398903 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1212 20:28:54.934324  398903 command_runner.go:130] > # reload'.
	I1212 20:28:54.934331  398903 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1212 20:28:54.934341  398903 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1212 20:28:54.934347  398903 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1212 20:28:54.934369  398903 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1212 20:28:54.934379  398903 command_runner.go:130] > [crio]
	I1212 20:28:54.934386  398903 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1212 20:28:54.934403  398903 command_runner.go:130] > # containers images, in this directory.
	I1212 20:28:54.934708  398903 command_runner.go:130] > # root = "/home/docker/.local/share/containers/storage"
	I1212 20:28:54.934725  398903 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1212 20:28:54.935118  398903 command_runner.go:130] > # runroot = "/tmp/storage-run-1000/containers"
	I1212 20:28:54.935167  398903 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1212 20:28:54.935270  398903 command_runner.go:130] > # imagestore = ""
	I1212 20:28:54.935280  398903 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1212 20:28:54.935288  398903 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1212 20:28:54.935534  398903 command_runner.go:130] > # storage_driver = "overlay"
	I1212 20:28:54.935547  398903 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1212 20:28:54.935554  398903 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1212 20:28:54.935682  398903 command_runner.go:130] > # storage_option = [
	I1212 20:28:54.935790  398903 command_runner.go:130] > # ]
	I1212 20:28:54.935801  398903 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1212 20:28:54.935808  398903 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1212 20:28:54.935977  398903 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1212 20:28:54.935987  398903 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1212 20:28:54.936004  398903 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1212 20:28:54.936009  398903 command_runner.go:130] > # always happen on a node reboot
	I1212 20:28:54.936228  398903 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1212 20:28:54.936250  398903 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1212 20:28:54.936257  398903 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1212 20:28:54.936263  398903 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1212 20:28:54.936389  398903 command_runner.go:130] > # version_file_persist = ""
	I1212 20:28:54.936402  398903 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1212 20:28:54.936411  398903 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1212 20:28:54.937698  398903 command_runner.go:130] > # internal_wipe = true
	I1212 20:28:54.937721  398903 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1212 20:28:54.937728  398903 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1212 20:28:54.937860  398903 command_runner.go:130] > # internal_repair = true
	I1212 20:28:54.937871  398903 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1212 20:28:54.937878  398903 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1212 20:28:54.937885  398903 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1212 20:28:54.938097  398903 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1212 20:28:54.938132  398903 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1212 20:28:54.938152  398903 command_runner.go:130] > [crio.api]
	I1212 20:28:54.938172  398903 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1212 20:28:54.938284  398903 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1212 20:28:54.938314  398903 command_runner.go:130] > # IP address on which the stream server will listen.
	I1212 20:28:54.938521  398903 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1212 20:28:54.938555  398903 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1212 20:28:54.938577  398903 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1212 20:28:54.938680  398903 command_runner.go:130] > # stream_port = "0"
	I1212 20:28:54.938717  398903 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1212 20:28:54.938951  398903 command_runner.go:130] > # stream_enable_tls = false
	I1212 20:28:54.938995  398903 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1212 20:28:54.939084  398903 command_runner.go:130] > # stream_idle_timeout = ""
	I1212 20:28:54.939113  398903 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1212 20:28:54.939142  398903 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes.
	I1212 20:28:54.939249  398903 command_runner.go:130] > # stream_tls_cert = ""
	I1212 20:28:54.939291  398903 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1212 20:28:54.939312  398903 command_runner.go:130] > # change and CRI-O will automatically pick up the changes.
	I1212 20:28:54.939622  398903 command_runner.go:130] > # stream_tls_key = ""
	I1212 20:28:54.939657  398903 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1212 20:28:54.939704  398903 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1212 20:28:54.939736  398903 command_runner.go:130] > # automatically pick up the changes.
	I1212 20:28:54.939811  398903 command_runner.go:130] > # stream_tls_ca = ""
	I1212 20:28:54.939858  398903 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 20:28:54.940308  398903 command_runner.go:130] > # grpc_max_send_msg_size = 83886080
	I1212 20:28:54.940353  398903 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1212 20:28:54.940776  398903 command_runner.go:130] > # grpc_max_recv_msg_size = 83886080
	I1212 20:28:54.940788  398903 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1212 20:28:54.940801  398903 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1212 20:28:54.940806  398903 command_runner.go:130] > [crio.runtime]
	I1212 20:28:54.940824  398903 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1212 20:28:54.940830  398903 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1212 20:28:54.940834  398903 command_runner.go:130] > # "nofile=1024:2048"
	I1212 20:28:54.940840  398903 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1212 20:28:54.940969  398903 command_runner.go:130] > # default_ulimits = [
	I1212 20:28:54.941191  398903 command_runner.go:130] > # ]
	I1212 20:28:54.941204  398903 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1212 20:28:54.941558  398903 command_runner.go:130] > # no_pivot = false
	I1212 20:28:54.941568  398903 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1212 20:28:54.941575  398903 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1212 20:28:54.941945  398903 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1212 20:28:54.941956  398903 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1212 20:28:54.941961  398903 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1212 20:28:54.942013  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:28:54.942279  398903 command_runner.go:130] > # conmon = ""
	I1212 20:28:54.942287  398903 command_runner.go:130] > # Cgroup setting for conmon
	I1212 20:28:54.942295  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1212 20:28:54.942500  398903 command_runner.go:130] > conmon_cgroup = "pod"
	I1212 20:28:54.942511  398903 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1212 20:28:54.942545  398903 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1212 20:28:54.942582  398903 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1212 20:28:54.942706  398903 command_runner.go:130] > # conmon_env = [
	I1212 20:28:54.942961  398903 command_runner.go:130] > # ]
	I1212 20:28:54.943022  398903 command_runner.go:130] > # Additional environment variables to set for all the
	I1212 20:28:54.943043  398903 command_runner.go:130] > # containers. These are overridden if set in the
	I1212 20:28:54.943084  398903 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1212 20:28:54.943203  398903 command_runner.go:130] > # default_env = [
	I1212 20:28:54.943456  398903 command_runner.go:130] > # ]
	I1212 20:28:54.943514  398903 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1212 20:28:54.943537  398903 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1212 20:28:54.943931  398903 command_runner.go:130] > # selinux = false
	I1212 20:28:54.943943  398903 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1212 20:28:54.943997  398903 command_runner.go:130] > # for the runtime. If not specified or set to "", then the internal default seccomp profile will be used.
	I1212 20:28:54.944007  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944219  398903 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:28:54.944231  398903 command_runner.go:130] > # Enable a seccomp profile for privileged containers from the local path.
	I1212 20:28:54.944237  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944517  398903 command_runner.go:130] > # privileged_seccomp_profile = ""
	I1212 20:28:54.944529  398903 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1212 20:28:54.944536  398903 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1212 20:28:54.944595  398903 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1212 20:28:54.944603  398903 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1212 20:28:54.944609  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.944908  398903 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1212 20:28:54.944919  398903 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1212 20:28:54.944924  398903 command_runner.go:130] > # the cgroup blockio controller.
	I1212 20:28:54.945253  398903 command_runner.go:130] > # blockio_config_file = ""
	I1212 20:28:54.945265  398903 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1212 20:28:54.945309  398903 command_runner.go:130] > # blockio parameters.
	I1212 20:28:54.945663  398903 command_runner.go:130] > # blockio_reload = false
	I1212 20:28:54.945676  398903 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1212 20:28:54.945725  398903 command_runner.go:130] > # irqbalance daemon.
	I1212 20:28:54.946100  398903 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1212 20:28:54.946111  398903 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1212 20:28:54.946174  398903 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1212 20:28:54.946186  398903 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1212 20:28:54.946547  398903 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1212 20:28:54.946561  398903 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1212 20:28:54.946567  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.946867  398903 command_runner.go:130] > # rdt_config_file = ""
	I1212 20:28:54.946878  398903 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1212 20:28:54.947089  398903 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1212 20:28:54.947100  398903 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1212 20:28:54.947442  398903 command_runner.go:130] > # separate_pull_cgroup = ""
	I1212 20:28:54.947454  398903 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1212 20:28:54.947513  398903 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1212 20:28:54.947527  398903 command_runner.go:130] > # will be added.
	I1212 20:28:54.947601  398903 command_runner.go:130] > # default_capabilities = [
	I1212 20:28:54.947867  398903 command_runner.go:130] > # 	"CHOWN",
	I1212 20:28:54.948094  398903 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1212 20:28:54.948277  398903 command_runner.go:130] > # 	"FSETID",
	I1212 20:28:54.948500  398903 command_runner.go:130] > # 	"FOWNER",
	I1212 20:28:54.948701  398903 command_runner.go:130] > # 	"SETGID",
	I1212 20:28:54.948883  398903 command_runner.go:130] > # 	"SETUID",
	I1212 20:28:54.949109  398903 command_runner.go:130] > # 	"SETPCAP",
	I1212 20:28:54.949307  398903 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1212 20:28:54.949502  398903 command_runner.go:130] > # 	"KILL",
	I1212 20:28:54.949671  398903 command_runner.go:130] > # ]
	I1212 20:28:54.949741  398903 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1212 20:28:54.949814  398903 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1212 20:28:54.950073  398903 command_runner.go:130] > # add_inheritable_capabilities = false
	I1212 20:28:54.950143  398903 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1212 20:28:54.950211  398903 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:28:54.950289  398903 command_runner.go:130] > default_sysctls = [
	I1212 20:28:54.950330  398903 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1212 20:28:54.950370  398903 command_runner.go:130] > ]
	I1212 20:28:54.950439  398903 command_runner.go:130] > # List of devices on the host that a
	I1212 20:28:54.950465  398903 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1212 20:28:54.950518  398903 command_runner.go:130] > # allowed_devices = [
	I1212 20:28:54.950672  398903 command_runner.go:130] > # 	"/dev/fuse",
	I1212 20:28:54.950902  398903 command_runner.go:130] > # 	"/dev/net/tun",
	I1212 20:28:54.951150  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951221  398903 command_runner.go:130] > # List of additional devices. specified as
	I1212 20:28:54.951244  398903 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1212 20:28:54.951280  398903 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1212 20:28:54.951306  398903 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1212 20:28:54.951324  398903 command_runner.go:130] > # additional_devices = [
	I1212 20:28:54.951343  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951424  398903 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1212 20:28:54.951503  398903 command_runner.go:130] > # cdi_spec_dirs = [
	I1212 20:28:54.951521  398903 command_runner.go:130] > # 	"/etc/cdi",
	I1212 20:28:54.951592  398903 command_runner.go:130] > # 	"/var/run/cdi",
	I1212 20:28:54.951609  398903 command_runner.go:130] > # ]
	I1212 20:28:54.951651  398903 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1212 20:28:54.951672  398903 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1212 20:28:54.951689  398903 command_runner.go:130] > # Defaults to false.
	I1212 20:28:54.951751  398903 command_runner.go:130] > # device_ownership_from_security_context = false
	I1212 20:28:54.951809  398903 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1212 20:28:54.951879  398903 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1212 20:28:54.951906  398903 command_runner.go:130] > # hooks_dir = [
	I1212 20:28:54.951934  398903 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1212 20:28:54.951952  398903 command_runner.go:130] > # ]
	I1212 20:28:54.952010  398903 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1212 20:28:54.952049  398903 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1212 20:28:54.952097  398903 command_runner.go:130] > # its default mounts from the following two files:
	I1212 20:28:54.952138  398903 command_runner.go:130] > #
	I1212 20:28:54.952160  398903 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1212 20:28:54.952191  398903 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1212 20:28:54.952262  398903 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1212 20:28:54.952281  398903 command_runner.go:130] > #
	I1212 20:28:54.952324  398903 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1212 20:28:54.952346  398903 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1212 20:28:54.952404  398903 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1212 20:28:54.952491  398903 command_runner.go:130] > #      only add mounts it finds in this file.
	I1212 20:28:54.952529  398903 command_runner.go:130] > #
	I1212 20:28:54.952568  398903 command_runner.go:130] > # default_mounts_file = ""
	I1212 20:28:54.952602  398903 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1212 20:28:54.952623  398903 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1212 20:28:54.952643  398903 command_runner.go:130] > # pids_limit = -1
	I1212 20:28:54.952677  398903 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I1212 20:28:54.952708  398903 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1212 20:28:54.952837  398903 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1212 20:28:54.952892  398903 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1212 20:28:54.952911  398903 command_runner.go:130] > # log_size_max = -1
	I1212 20:28:54.952955  398903 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1212 20:28:54.953009  398903 command_runner.go:130] > # log_to_journald = false
	I1212 20:28:54.953062  398903 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1212 20:28:54.953088  398903 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1212 20:28:54.953123  398903 command_runner.go:130] > # Path to directory for container attach sockets.
	I1212 20:28:54.953149  398903 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1212 20:28:54.953170  398903 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1212 20:28:54.953206  398903 command_runner.go:130] > # bind_mount_prefix = ""
	I1212 20:28:54.953299  398903 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1212 20:28:54.953339  398903 command_runner.go:130] > # read_only = false
	I1212 20:28:54.953359  398903 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1212 20:28:54.953395  398903 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1212 20:28:54.953418  398903 command_runner.go:130] > # live configuration reload.
	I1212 20:28:54.953436  398903 command_runner.go:130] > # log_level = "info"
	I1212 20:28:54.953472  398903 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1212 20:28:54.953562  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.953601  398903 command_runner.go:130] > # log_filter = ""
	I1212 20:28:54.953622  398903 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1212 20:28:54.953643  398903 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1212 20:28:54.953675  398903 command_runner.go:130] > # separated by comma.
	I1212 20:28:54.953712  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.953763  398903 command_runner.go:130] > # uid_mappings = ""
	I1212 20:28:54.953804  398903 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1212 20:28:54.953825  398903 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1212 20:28:54.953843  398903 command_runner.go:130] > # separated by comma.
	I1212 20:28:54.953907  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.953931  398903 command_runner.go:130] > # gid_mappings = ""
	I1212 20:28:54.953969  398903 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1212 20:28:54.954021  398903 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:28:54.954062  398903 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:28:54.954085  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.954103  398903 command_runner.go:130] > # minimum_mappable_uid = -1
	I1212 20:28:54.954162  398903 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1212 20:28:54.954184  398903 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1212 20:28:54.954234  398903 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1212 20:28:54.954322  398903 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1212 20:28:54.954363  398903 command_runner.go:130] > # minimum_mappable_gid = -1
	I1212 20:28:54.954382  398903 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1212 20:28:54.954423  398903 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1212 20:28:54.954443  398903 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1212 20:28:54.954461  398903 command_runner.go:130] > # ctr_stop_timeout = 30
	I1212 20:28:54.954533  398903 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1212 20:28:54.954586  398903 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1212 20:28:54.954623  398903 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1212 20:28:54.954643  398903 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1212 20:28:54.954683  398903 command_runner.go:130] > # drop_infra_ctr = true
	I1212 20:28:54.954704  398903 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1212 20:28:54.954737  398903 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1212 20:28:54.954797  398903 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1212 20:28:54.954876  398903 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1212 20:28:54.954917  398903 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1212 20:28:54.954947  398903 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1212 20:28:54.954967  398903 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1212 20:28:54.955001  398903 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1212 20:28:54.955088  398903 command_runner.go:130] > # shared_cpuset = ""
	I1212 20:28:54.955124  398903 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1212 20:28:54.955160  398903 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1212 20:28:54.955179  398903 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1212 20:28:54.955201  398903 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1212 20:28:54.955242  398903 command_runner.go:130] > # pinns_path = ""
	I1212 20:28:54.955301  398903 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1212 20:28:54.955365  398903 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1212 20:28:54.955383  398903 command_runner.go:130] > # enable_criu_support = true
	I1212 20:28:54.955425  398903 command_runner.go:130] > # Enable/disable the generation of the container,
	I1212 20:28:54.955447  398903 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1212 20:28:54.955466  398903 command_runner.go:130] > # enable_pod_events = false
	I1212 20:28:54.955506  398903 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1212 20:28:54.955594  398903 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1212 20:28:54.955624  398903 command_runner.go:130] > # default_runtime = "crun"
	I1212 20:28:54.955661  398903 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1212 20:28:54.955697  398903 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1212 20:28:54.955721  398903 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1212 20:28:54.955790  398903 command_runner.go:130] > # creation as a file is not desired either.
	I1212 20:28:54.955868  398903 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1212 20:28:54.955891  398903 command_runner.go:130] > # the hostname is being managed dynamically.
	I1212 20:28:54.955927  398903 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1212 20:28:54.955946  398903 command_runner.go:130] > # ]
	I1212 20:28:54.955966  398903 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1212 20:28:54.956007  398903 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1212 20:28:54.956057  398903 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1212 20:28:54.956117  398903 command_runner.go:130] > # Each entry in the table should follow the format:
	I1212 20:28:54.956136  398903 command_runner.go:130] > #
	I1212 20:28:54.956299  398903 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1212 20:28:54.956391  398903 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1212 20:28:54.956423  398903 command_runner.go:130] > # runtime_type = "oci"
	I1212 20:28:54.956443  398903 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1212 20:28:54.956476  398903 command_runner.go:130] > # inherit_default_runtime = false
	I1212 20:28:54.956515  398903 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1212 20:28:54.956535  398903 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1212 20:28:54.956555  398903 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1212 20:28:54.956602  398903 command_runner.go:130] > # monitor_env = []
	I1212 20:28:54.956632  398903 command_runner.go:130] > # privileged_without_host_devices = false
	I1212 20:28:54.956651  398903 command_runner.go:130] > # allowed_annotations = []
	I1212 20:28:54.956673  398903 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1212 20:28:54.956703  398903 command_runner.go:130] > # no_sync_log = false
	I1212 20:28:54.956730  398903 command_runner.go:130] > # default_annotations = {}
	I1212 20:28:54.956749  398903 command_runner.go:130] > # stream_websockets = false
	I1212 20:28:54.956770  398903 command_runner.go:130] > # seccomp_profile = ""
	I1212 20:28:54.956828  398903 command_runner.go:130] > # Where:
	I1212 20:28:54.956858  398903 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1212 20:28:54.956879  398903 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1212 20:28:54.956902  398903 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1212 20:28:54.956934  398903 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1212 20:28:54.956956  398903 command_runner.go:130] > #   in $PATH.
	I1212 20:28:54.956979  398903 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1212 20:28:54.957012  398903 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1212 20:28:54.957045  398903 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1212 20:28:54.957066  398903 command_runner.go:130] > #   state.
	I1212 20:28:54.957088  398903 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1212 20:28:54.957122  398903 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1212 20:28:54.957146  398903 command_runner.go:130] > # - inherit_default_runtime (optional, bool): when true the runtime_path,
	I1212 20:28:54.957169  398903 command_runner.go:130] > #   runtime_type, runtime_root and runtime_config_path will be replaced by
	I1212 20:28:54.957202  398903 command_runner.go:130] > #   the values from the default runtime on load time.
	I1212 20:28:54.957227  398903 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1212 20:28:54.957250  398903 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1212 20:28:54.957281  398903 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1212 20:28:54.957305  398903 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1212 20:28:54.957327  398903 command_runner.go:130] > #   The currently recognized values are:
	I1212 20:28:54.957359  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1212 20:28:54.957385  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1212 20:28:54.957408  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1212 20:28:54.957450  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1212 20:28:54.957471  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1212 20:28:54.957498  398903 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1212 20:28:54.957534  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1212 20:28:54.957557  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1212 20:28:54.957580  398903 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1212 20:28:54.957613  398903 command_runner.go:130] > #   "seccomp-profile.kubernetes.cri-o.io" for setting the seccomp profile for:
	I1212 20:28:54.957636  398903 command_runner.go:130] > #     - a specific container by using: "seccomp-profile.kubernetes.cri-o.io/<CONTAINER_NAME>"
	I1212 20:28:54.957657  398903 command_runner.go:130] > #     - a whole pod by using: "seccomp-profile.kubernetes.cri-o.io/POD"
	I1212 20:28:54.957689  398903 command_runner.go:130] > #     Note that the annotation works on containers as well as on images.
	I1212 20:28:54.957712  398903 command_runner.go:130] > #     For images, the plain annotation "seccomp-profile.kubernetes.cri-o.io"
	I1212 20:28:54.957733  398903 command_runner.go:130] > #     can be used without the required "/POD" suffix or a container name.
	I1212 20:28:54.957769  398903 command_runner.go:130] > #   "io.kubernetes.cri-o.DisableFIPS" for disabling FIPS mode in a Kubernetes pod within a FIPS-enabled cluster.
	I1212 20:28:54.957795  398903 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1212 20:28:54.957816  398903 command_runner.go:130] > #   deprecated option "conmon".
	I1212 20:28:54.957848  398903 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1212 20:28:54.957870  398903 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1212 20:28:54.957893  398903 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1212 20:28:54.957923  398903 command_runner.go:130] > #   should be moved to the container's cgroup
	I1212 20:28:54.957949  398903 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1212 20:28:54.957971  398903 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1212 20:28:54.958007  398903 command_runner.go:130] > #   When using the pod runtime and conmon-rs, then the monitor_env can be used to further configure
	I1212 20:28:54.958030  398903 command_runner.go:130] > #   conmon-rs by using:
	I1212 20:28:54.958053  398903 command_runner.go:130] > #     - LOG_DRIVER=[none,systemd,stdout] - Enable logging to the configured target, defaults to none.
	I1212 20:28:54.958092  398903 command_runner.go:130] > #     - HEAPTRACK_OUTPUT_PATH=/path/to/dir - Enable heaptrack profiling and save the files to the set directory.
	I1212 20:28:54.958133  398903 command_runner.go:130] > #     - HEAPTRACK_BINARY_PATH=/path/to/heaptrack - Enable heaptrack profiling and use set heaptrack binary.
	I1212 20:28:54.958204  398903 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1212 20:28:54.958225  398903 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1212 20:28:54.958278  398903 command_runner.go:130] > # - container_min_memory (optional, string): The minimum memory that must be set for a container.
	I1212 20:28:54.958303  398903 command_runner.go:130] > #   This value can be used to override the currently set global value for a specific runtime. If not set,
	I1212 20:28:54.958340  398903 command_runner.go:130] > #   a global default value of "12 MiB" will be used.
	I1212 20:28:54.958372  398903 command_runner.go:130] > # - no_sync_log (optional, bool): If set to true, the runtime will not sync the log file on rotate or container exit.
	I1212 20:28:54.958415  398903 command_runner.go:130] > #   This option is only valid for the 'oci' runtime type. Setting this option to true can cause data loss, e.g.
	I1212 20:28:54.958449  398903 command_runner.go:130] > #   when a machine crash happens.
	I1212 20:28:54.958472  398903 command_runner.go:130] > # - default_annotations (optional, map): Default annotations if not overridden by the pod spec.
	I1212 20:28:54.958496  398903 command_runner.go:130] > # - stream_websockets (optional, bool): Enable the WebSocket protocol for container exec, attach and port forward.
	I1212 20:28:54.958530  398903 command_runner.go:130] > # - seccomp_profile (optional, string): The absolute path of the seccomp.json profile which is used as the default
	I1212 20:28:54.958560  398903 command_runner.go:130] > #   seccomp profile for the runtime.
	I1212 20:28:54.958583  398903 command_runner.go:130] > #   If not specified or set to "", the runtime seccomp_profile will be used.
	I1212 20:28:54.958606  398903 command_runner.go:130] > #   If that is also not specified or set to "", the internal default seccomp profile will be applied.
	I1212 20:28:54.958635  398903 command_runner.go:130] > #
	I1212 20:28:54.958656  398903 command_runner.go:130] > # Using the seccomp notifier feature:
	I1212 20:28:54.958676  398903 command_runner.go:130] > #
	I1212 20:28:54.958708  398903 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1212 20:28:54.958738  398903 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1212 20:28:54.958756  398903 command_runner.go:130] > #
	I1212 20:28:54.958778  398903 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1212 20:28:54.958809  398903 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1212 20:28:54.958834  398903 command_runner.go:130] > #
	I1212 20:28:54.958854  398903 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1212 20:28:54.958874  398903 command_runner.go:130] > # feature.
	I1212 20:28:54.958903  398903 command_runner.go:130] > #
	I1212 20:28:54.958934  398903 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1212 20:28:54.958955  398903 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1212 20:28:54.958978  398903 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1212 20:28:54.959015  398903 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1212 20:28:54.959041  398903 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1212 20:28:54.959060  398903 command_runner.go:130] > #
	I1212 20:28:54.959092  398903 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1212 20:28:54.959116  398903 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1212 20:28:54.959135  398903 command_runner.go:130] > #
	I1212 20:28:54.959166  398903 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1212 20:28:54.959195  398903 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1212 20:28:54.959213  398903 command_runner.go:130] > #
	I1212 20:28:54.959234  398903 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1212 20:28:54.959264  398903 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1212 20:28:54.959290  398903 command_runner.go:130] > # limitation.
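	To make the seccomp notifier setup described above concrete, a minimal sketch of a runtime handler that allows the notifier annotation might look like the following (the handler name "runc-notify" is illustrative and does not appear in this run's configuration):

	    [crio.runtime.runtimes.runc-notify]
	    runtime_path = "/usr/libexec/crio/runc"
	    allowed_annotations = [
	        "io.kubernetes.cri-o.seccompNotifierAction",
	    ]

	A Pod sandbox that should be terminated on a blocked syscall would then carry the annotation io.kubernetes.cri-o.seccompNotifierAction: "stop" and, as noted above, use restartPolicy: Never.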
	I1212 20:28:54.959309  398903 command_runner.go:130] > [crio.runtime.runtimes.crun]
	I1212 20:28:54.959329  398903 command_runner.go:130] > runtime_path = "/usr/libexec/crio/crun"
	I1212 20:28:54.959363  398903 command_runner.go:130] > runtime_type = ""
	I1212 20:28:54.959390  398903 command_runner.go:130] > runtime_root = "/run/crun"
	I1212 20:28:54.959409  398903 command_runner.go:130] > inherit_default_runtime = false
	I1212 20:28:54.959429  398903 command_runner.go:130] > runtime_config_path = ""
	I1212 20:28:54.959460  398903 command_runner.go:130] > container_min_memory = ""
	I1212 20:28:54.959486  398903 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 20:28:54.959503  398903 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 20:28:54.959521  398903 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:28:54.959541  398903 command_runner.go:130] > allowed_annotations = [
	I1212 20:28:54.959574  398903 command_runner.go:130] > 	"io.containers.trace-syscall",
	I1212 20:28:54.959593  398903 command_runner.go:130] > ]
	I1212 20:28:54.959612  398903 command_runner.go:130] > privileged_without_host_devices = false
	I1212 20:28:54.959644  398903 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1212 20:28:54.959671  398903 command_runner.go:130] > runtime_path = "/usr/libexec/crio/runc"
	I1212 20:28:54.959688  398903 command_runner.go:130] > runtime_type = ""
	I1212 20:28:54.959705  398903 command_runner.go:130] > runtime_root = "/run/runc"
	I1212 20:28:54.959727  398903 command_runner.go:130] > inherit_default_runtime = false
	I1212 20:28:54.959762  398903 command_runner.go:130] > runtime_config_path = ""
	I1212 20:28:54.959780  398903 command_runner.go:130] > container_min_memory = ""
	I1212 20:28:54.959800  398903 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1212 20:28:54.959819  398903 command_runner.go:130] > monitor_cgroup = "pod"
	I1212 20:28:54.959855  398903 command_runner.go:130] > monitor_exec_cgroup = ""
	I1212 20:28:54.959872  398903 command_runner.go:130] > privileged_without_host_devices = false
	I1212 20:28:54.959894  398903 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1212 20:28:54.959924  398903 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1212 20:28:54.959953  398903 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1212 20:28:54.959976  398903 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1212 20:28:54.960002  398903 command_runner.go:130] > # The currently supported resources are "cpuperiod", "cpuquota", "cpushares", "cpulimit" and "cpuset". The values for "cpuperiod" and "cpuquota" are denoted in microseconds.
	I1212 20:28:54.960047  398903 command_runner.go:130] > # The value for "cpulimit" is denoted in millicores, this value is used to calculate the "cpuquota" with the supplied "cpuperiod" or the default "cpuperiod".
	I1212 20:28:54.960072  398903 command_runner.go:130] > # Note that the "cpulimit" field overrides the "cpuquota" value supplied in this configuration.
	I1212 20:28:54.960106  398903 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1212 20:28:54.960135  398903 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1212 20:28:54.960156  398903 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1212 20:28:54.960176  398903 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1212 20:28:54.960207  398903 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1212 20:28:54.960236  398903 command_runner.go:130] > # Example:
	I1212 20:28:54.960257  398903 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1212 20:28:54.960281  398903 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1212 20:28:54.960315  398903 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1212 20:28:54.960337  398903 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1212 20:28:54.960356  398903 command_runner.go:130] > # cpuset = "0-1"
	I1212 20:28:54.960392  398903 command_runner.go:130] > # cpushares = "5"
	I1212 20:28:54.960413  398903 command_runner.go:130] > # cpuquota = "1000"
	I1212 20:28:54.960435  398903 command_runner.go:130] > # cpuperiod = "100000"
	I1212 20:28:54.960473  398903 command_runner.go:130] > # cpulimit = "35"
	I1212 20:28:54.960495  398903 command_runner.go:130] > # Where:
	I1212 20:28:54.960507  398903 command_runner.go:130] > # The workload name is workload-type.
	I1212 20:28:54.960516  398903 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1212 20:28:54.960522  398903 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1212 20:28:54.960542  398903 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1212 20:28:54.960555  398903 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1212 20:28:54.960563  398903 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
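	Working through the example values above: a cpulimit of "35" millicores with the cpuperiod of "100000" microseconds corresponds to a cpuquota of 3500 (35/1000 x 100000), which, per the note above, overrides the cpuquota of "1000" given in the example.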
	I1212 20:28:54.960568  398903 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1212 20:28:54.960575  398903 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1212 20:28:54.960579  398903 command_runner.go:130] > # Default value is set to true
	I1212 20:28:54.960595  398903 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1212 20:28:54.960602  398903 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1212 20:28:54.960613  398903 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1212 20:28:54.960618  398903 command_runner.go:130] > # Default value is set to 'false'
	I1212 20:28:54.960623  398903 command_runner.go:130] > # disable_hostport_mapping = false
	I1212 20:28:54.960637  398903 command_runner.go:130] > # timezone To set the timezone for a container in CRI-O.
	I1212 20:28:54.960645  398903 command_runner.go:130] > # If an empty string is provided, CRI-O retains its default behavior. Use 'Local' to match the timezone of the host machine.
	I1212 20:28:54.960649  398903 command_runner.go:130] > # timezone = ""
	I1212 20:28:54.960656  398903 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1212 20:28:54.960661  398903 command_runner.go:130] > #
	I1212 20:28:54.960668  398903 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1212 20:28:54.960675  398903 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf.
	I1212 20:28:54.960682  398903 command_runner.go:130] > [crio.image]
	I1212 20:28:54.960688  398903 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1212 20:28:54.960693  398903 command_runner.go:130] > # default_transport = "docker://"
	I1212 20:28:54.960702  398903 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1212 20:28:54.960714  398903 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:28:54.960719  398903 command_runner.go:130] > # global_auth_file = ""
	I1212 20:28:54.960724  398903 command_runner.go:130] > # The image used to instantiate infra containers.
	I1212 20:28:54.960730  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.960738  398903 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.10.1"
	I1212 20:28:54.960745  398903 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1212 20:28:54.960758  398903 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1212 20:28:54.960764  398903 command_runner.go:130] > # This option supports live configuration reload.
	I1212 20:28:54.960770  398903 command_runner.go:130] > # pause_image_auth_file = ""
	I1212 20:28:54.960777  398903 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1212 20:28:54.960783  398903 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1212 20:28:54.960793  398903 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1212 20:28:54.960800  398903 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1212 20:28:54.960804  398903 command_runner.go:130] > # pause_command = "/pause"
	I1212 20:28:54.960810  398903 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1212 20:28:54.960819  398903 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1212 20:28:54.960828  398903 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1212 20:28:54.960837  398903 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1212 20:28:54.960843  398903 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1212 20:28:54.960855  398903 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1212 20:28:54.960859  398903 command_runner.go:130] > # pinned_images = [
	I1212 20:28:54.960863  398903 command_runner.go:130] > # ]
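	As a purely illustrative sketch of the three pattern styles described above (these image names are placeholders, not values from this configuration):

	    pinned_images = [
	        "registry.k8s.io/pause:3.10.1",  # exact: must match the entire name
	        "registry.k8s.io/kube-*",        # glob: wildcard only at the end
	        "*coredns*",                     # keyword: wildcards on both ends
	    ]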
	I1212 20:28:54.960869  398903 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1212 20:28:54.960879  398903 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1212 20:28:54.960885  398903 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1212 20:28:54.960891  398903 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1212 20:28:54.960902  398903 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1212 20:28:54.960910  398903 command_runner.go:130] > signature_policy = "/etc/crio/policy.json"
	I1212 20:28:54.960916  398903 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1212 20:28:54.960923  398903 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1212 20:28:54.960933  398903 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1212 20:28:54.960939  398903 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1212 20:28:54.960948  398903 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1212 20:28:54.960953  398903 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
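	For example, with the directory shown here, an image pulled for a pod in the kube-system namespace would first be checked against /etc/crio/policies/kube-system.json; if no namespace is supplied or that file does not exist, the signature_policy above (/etc/crio/policy.json) or the system-wide policy applies instead.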
	I1212 20:28:54.960960  398903 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1212 20:28:54.960969  398903 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1212 20:28:54.960973  398903 command_runner.go:130] > # changing them here.
	I1212 20:28:54.960979  398903 command_runner.go:130] > # This option is deprecated. Use registries.conf file instead.
	I1212 20:28:54.960983  398903 command_runner.go:130] > # insecure_registries = [
	I1212 20:28:54.960986  398903 command_runner.go:130] > # ]
	I1212 20:28:54.960995  398903 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1212 20:28:54.961006  398903 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1212 20:28:54.961012  398903 command_runner.go:130] > # image_volumes = "mkdir"
	I1212 20:28:54.961020  398903 command_runner.go:130] > # Temporary directory to use for storing big files
	I1212 20:28:54.961026  398903 command_runner.go:130] > # big_files_temporary_dir = ""
	I1212 20:28:54.961032  398903 command_runner.go:130] > # If true, CRI-O will automatically reload the mirror registry when
	I1212 20:28:54.961042  398903 command_runner.go:130] > # there is an update to the 'registries.conf.d' directory. Default value is set to 'false'.
	I1212 20:28:54.961046  398903 command_runner.go:130] > # auto_reload_registries = false
	I1212 20:28:54.961054  398903 command_runner.go:130] > # The timeout for an image pull to make progress until the pull operation
	I1212 20:28:54.961062  398903 command_runner.go:130] > # gets canceled. This value will also be used for calculating the pull progress interval to pull_progress_timeout / 10.
	I1212 20:28:54.961069  398903 command_runner.go:130] > # Can be set to 0 to disable the timeout as well as the progress output.
	I1212 20:28:54.961077  398903 command_runner.go:130] > # pull_progress_timeout = "0s"
	I1212 20:28:54.961082  398903 command_runner.go:130] > # The mode of short name resolution.
	I1212 20:28:54.961089  398903 command_runner.go:130] > # The valid values are "enforcing" and "disabled", and the default is "enforcing".
	I1212 20:28:54.961100  398903 command_runner.go:130] > # If "enforcing", an image pull will fail if a short name is used, but the results are ambiguous.
	I1212 20:28:54.961105  398903 command_runner.go:130] > # If "disabled", the first result will be chosen.
	I1212 20:28:54.961112  398903 command_runner.go:130] > # short_name_mode = "enforcing"
	I1212 20:28:54.961118  398903 command_runner.go:130] > # OCIArtifactMountSupport is whether CRI-O should support OCI artifacts.
	I1212 20:28:54.961124  398903 command_runner.go:130] > # If set to false, mounting OCI Artifacts will result in an error.
	I1212 20:28:54.961132  398903 command_runner.go:130] > # oci_artifact_mount_support = true
	I1212 20:28:54.961138  398903 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1212 20:28:54.961142  398903 command_runner.go:130] > # CNI plugins.
	I1212 20:28:54.961146  398903 command_runner.go:130] > [crio.network]
	I1212 20:28:54.961152  398903 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1212 20:28:54.961159  398903 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1212 20:28:54.961164  398903 command_runner.go:130] > # cni_default_network = ""
	I1212 20:28:54.961171  398903 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1212 20:28:54.961179  398903 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1212 20:28:54.961185  398903 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1212 20:28:54.961189  398903 command_runner.go:130] > # plugin_dirs = [
	I1212 20:28:54.961195  398903 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1212 20:28:54.961198  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961209  398903 command_runner.go:130] > # List of included pod metrics.
	I1212 20:28:54.961213  398903 command_runner.go:130] > # included_pod_metrics = [
	I1212 20:28:54.961217  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961224  398903 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1212 20:28:54.961228  398903 command_runner.go:130] > [crio.metrics]
	I1212 20:28:54.961234  398903 command_runner.go:130] > # Globally enable or disable metrics support.
	I1212 20:28:54.961243  398903 command_runner.go:130] > # enable_metrics = false
	I1212 20:28:54.961248  398903 command_runner.go:130] > # Specify enabled metrics collectors.
	I1212 20:28:54.961253  398903 command_runner.go:130] > # Per default all metrics are enabled.
	I1212 20:28:54.961262  398903 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1212 20:28:54.961271  398903 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1212 20:28:54.961280  398903 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1212 20:28:54.961285  398903 command_runner.go:130] > # metrics_collectors = [
	I1212 20:28:54.961291  398903 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1212 20:28:54.961296  398903 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1212 20:28:54.961302  398903 command_runner.go:130] > # 	"containers_oom_total",
	I1212 20:28:54.961306  398903 command_runner.go:130] > # 	"processes_defunct",
	I1212 20:28:54.961311  398903 command_runner.go:130] > # 	"operations_total",
	I1212 20:28:54.961315  398903 command_runner.go:130] > # 	"operations_latency_seconds",
	I1212 20:28:54.961320  398903 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1212 20:28:54.961324  398903 command_runner.go:130] > # 	"operations_errors_total",
	I1212 20:28:54.961328  398903 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1212 20:28:54.961333  398903 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1212 20:28:54.961338  398903 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1212 20:28:54.961342  398903 command_runner.go:130] > # 	"image_pulls_success_total",
	I1212 20:28:54.961346  398903 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1212 20:28:54.961351  398903 command_runner.go:130] > # 	"containers_oom_count_total",
	I1212 20:28:54.961358  398903 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1212 20:28:54.961363  398903 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1212 20:28:54.961374  398903 command_runner.go:130] > # 	"containers_stopped_monitor_count",
	I1212 20:28:54.961377  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961383  398903 command_runner.go:130] > # The IP address or hostname on which the metrics server will listen.
	I1212 20:28:54.961389  398903 command_runner.go:130] > # metrics_host = "127.0.0.1"
	I1212 20:28:54.961394  398903 command_runner.go:130] > # The port on which the metrics server will listen.
	I1212 20:28:54.961398  398903 command_runner.go:130] > # metrics_port = 9090
	I1212 20:28:54.961404  398903 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1212 20:28:54.961409  398903 command_runner.go:130] > # metrics_socket = ""
	I1212 20:28:54.961420  398903 command_runner.go:130] > # The certificate for the secure metrics server.
	I1212 20:28:54.961429  398903 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1212 20:28:54.961440  398903 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1212 20:28:54.961445  398903 command_runner.go:130] > # certificate on any modification event.
	I1212 20:28:54.961452  398903 command_runner.go:130] > # metrics_cert = ""
	I1212 20:28:54.961458  398903 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1212 20:28:54.961464  398903 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1212 20:28:54.961470  398903 command_runner.go:130] > # metrics_key = ""
	I1212 20:28:54.961476  398903 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1212 20:28:54.961480  398903 command_runner.go:130] > [crio.tracing]
	I1212 20:28:54.961487  398903 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1212 20:28:54.961491  398903 command_runner.go:130] > # enable_tracing = false
	I1212 20:28:54.961499  398903 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1212 20:28:54.961504  398903 command_runner.go:130] > # tracing_endpoint = "127.0.0.1:4317"
	I1212 20:28:54.961513  398903 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1212 20:28:54.961520  398903 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1212 20:28:54.961527  398903 command_runner.go:130] > # CRI-O NRI configuration.
	I1212 20:28:54.961530  398903 command_runner.go:130] > [crio.nri]
	I1212 20:28:54.961534  398903 command_runner.go:130] > # Globally enable or disable NRI.
	I1212 20:28:54.961544  398903 command_runner.go:130] > # enable_nri = true
	I1212 20:28:54.961548  398903 command_runner.go:130] > # NRI socket to listen on.
	I1212 20:28:54.961553  398903 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1212 20:28:54.961559  398903 command_runner.go:130] > # NRI plugin directory to use.
	I1212 20:28:54.961564  398903 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1212 20:28:54.961569  398903 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1212 20:28:54.961574  398903 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1212 20:28:54.961579  398903 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1212 20:28:54.961660  398903 command_runner.go:130] > # nri_disable_connections = false
	I1212 20:28:54.961672  398903 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1212 20:28:54.961678  398903 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1212 20:28:54.961683  398903 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1212 20:28:54.961689  398903 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1212 20:28:54.961696  398903 command_runner.go:130] > # NRI default validator configuration.
	I1212 20:28:54.961703  398903 command_runner.go:130] > # If enabled, the builtin default validator can be used to reject a container if some
	I1212 20:28:54.961717  398903 command_runner.go:130] > # NRI plugin requested a restricted adjustment. Currently the following adjustments
	I1212 20:28:54.961722  398903 command_runner.go:130] > # can be restricted/rejected:
	I1212 20:28:54.961728  398903 command_runner.go:130] > # - OCI hook injection
	I1212 20:28:54.961734  398903 command_runner.go:130] > # - adjustment of runtime default seccomp profile
	I1212 20:28:54.961740  398903 command_runner.go:130] > # - adjustment of unconfined seccomp profile
	I1212 20:28:54.961747  398903 command_runner.go:130] > # - adjustment of a custom seccomp profile
	I1212 20:28:54.961752  398903 command_runner.go:130] > # - adjustment of linux namespaces
	I1212 20:28:54.961759  398903 command_runner.go:130] > # Additionally, the default validator can be used to reject container creation if any
	I1212 20:28:54.961766  398903 command_runner.go:130] > # of a required set of plugins has not processed a container creation request, unless
	I1212 20:28:54.961775  398903 command_runner.go:130] > # the container has been annotated to tolerate a missing plugin.
	I1212 20:28:54.961779  398903 command_runner.go:130] > #
	I1212 20:28:54.961783  398903 command_runner.go:130] > # [crio.nri.default_validator]
	I1212 20:28:54.961791  398903 command_runner.go:130] > # nri_enable_default_validator = false
	I1212 20:28:54.961796  398903 command_runner.go:130] > # nri_validator_reject_oci_hook_adjustment = false
	I1212 20:28:54.961802  398903 command_runner.go:130] > # nri_validator_reject_runtime_default_seccomp_adjustment = false
	I1212 20:28:54.961810  398903 command_runner.go:130] > # nri_validator_reject_unconfined_seccomp_adjustment = false
	I1212 20:28:54.961815  398903 command_runner.go:130] > # nri_validator_reject_custom_seccomp_adjustment = false
	I1212 20:28:54.961821  398903 command_runner.go:130] > # nri_validator_reject_namespace_adjustment = false
	I1212 20:28:54.961828  398903 command_runner.go:130] > # nri_validator_required_plugins = [
	I1212 20:28:54.961831  398903 command_runner.go:130] > # ]
	I1212 20:28:54.961838  398903 command_runner.go:130] > # nri_validator_tolerate_missing_plugins_annotation = ""
	I1212 20:28:54.961845  398903 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1212 20:28:54.961851  398903 command_runner.go:130] > [crio.stats]
	I1212 20:28:54.961860  398903 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1212 20:28:54.961866  398903 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1212 20:28:54.961872  398903 command_runner.go:130] > # stats_collection_period = 0
	I1212 20:28:54.961879  398903 command_runner.go:130] > # The number of seconds between collecting pod/container stats and pod
	I1212 20:28:54.961889  398903 command_runner.go:130] > # sandbox metrics. If set to 0, the metrics/stats are collected on-demand instead.
	I1212 20:28:54.961894  398903 command_runner.go:130] > # collection_period = 0
	I1212 20:28:54.961945  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912485774Z" level=info msg="Updating config from single file: /etc/crio/crio.conf"
	I1212 20:28:54.961961  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912523214Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf"
	I1212 20:28:54.961978  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912551908Z" level=info msg="Skipping not-existing config file \"/etc/crio/crio.conf\""
	I1212 20:28:54.961989  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912577237Z" level=info msg="Updating config from path: /etc/crio/crio.conf.d"
	I1212 20:28:54.962000  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912661332Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:28:54.962016  398903 command_runner.go:130] ! time="2025-12-12T20:28:54.912929282Z" level=info msg="Updating config from drop-in file: /etc/crio/crio.conf.d/10-crio.conf"
	I1212 20:28:54.962028  398903 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1212 20:28:54.962158  398903 cni.go:84] Creating CNI manager for ""
	I1212 20:28:54.962172  398903 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:28:54.962187  398903 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:28:54.962211  398903 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-261311 NodeName:functional-261311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:28:54.962351  398903 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-261311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:28:54.962430  398903 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:28:54.969281  398903 command_runner.go:130] > kubeadm
	I1212 20:28:54.969300  398903 command_runner.go:130] > kubectl
	I1212 20:28:54.969304  398903 command_runner.go:130] > kubelet
	I1212 20:28:54.970141  398903 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:28:54.970208  398903 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:28:54.977797  398903 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:28:54.990948  398903 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:28:55.010887  398903 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1212 20:28:55.035195  398903 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:28:55.039688  398903 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1212 20:28:55.039770  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:55.162925  398903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:28:55.180455  398903 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311 for IP: 192.168.49.2
	I1212 20:28:55.180486  398903 certs.go:195] generating shared ca certs ...
	I1212 20:28:55.180503  398903 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.180666  398903 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:28:55.180714  398903 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:28:55.180726  398903 certs.go:257] generating profile certs ...
	I1212 20:28:55.180830  398903 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key
	I1212 20:28:55.180895  398903 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7
	I1212 20:28:55.180950  398903 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key
	I1212 20:28:55.180963  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:28:55.180976  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:28:55.180993  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:28:55.181015  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:28:55.181034  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:28:55.181047  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:28:55.181062  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:28:55.181077  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:28:55.181130  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:28:55.181167  398903 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:28:55.181180  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:28:55.181208  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:28:55.181238  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:28:55.181263  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:28:55.181322  398903 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:28:55.181358  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.181374  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.181387  398903 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.181918  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:28:55.205330  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:28:55.228282  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:28:55.247851  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:28:55.266269  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:28:55.284183  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:28:55.302120  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:28:55.319891  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:28:55.338073  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:28:55.356708  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:28:55.374821  398903 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:28:55.392459  398903 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:28:55.405239  398903 ssh_runner.go:195] Run: openssl version
	I1212 20:28:55.411334  398903 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1212 20:28:55.411437  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.418985  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:28:55.426485  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430183  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430452  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.430510  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:28:55.471108  398903 command_runner.go:130] > b5213941
	I1212 20:28:55.471637  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:28:55.479292  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.486905  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:28:55.494608  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498479  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498582  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.498669  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:28:55.541933  398903 command_runner.go:130] > 51391683
	I1212 20:28:55.542454  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:28:55.550083  398903 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.558343  398903 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:28:55.567964  398903 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571832  398903 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571862  398903 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.571932  398903 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:28:55.617329  398903 command_runner.go:130] > 3ec20f2e
	I1212 20:28:55.617911  398903 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
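	The sequence above follows OpenSSL's hashed-directory convention for CA lookup: each certificate is copied under /usr/share/ca-certificates, symlinked into /etc/ssl/certs, and "openssl x509 -hash -noout" prints the subject-name hash (b5213941, 51391683, 3ec20f2e here) that names the <hash>.0 symlink OpenSSL uses to locate the CA.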
	I1212 20:28:55.625593  398903 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:28:55.629390  398903 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:28:55.629419  398903 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1212 20:28:55.629427  398903 command_runner.go:130] > Device: 259,1	Inode: 1315224     Links: 1
	I1212 20:28:55.629433  398903 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1212 20:28:55.629439  398903 command_runner.go:130] > Access: 2025-12-12 20:24:47.845478497 +0000
	I1212 20:28:55.629445  398903 command_runner.go:130] > Modify: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629449  398903 command_runner.go:130] > Change: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629454  398903 command_runner.go:130] >  Birth: 2025-12-12 20:20:43.170948183 +0000
	I1212 20:28:55.629525  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:28:55.669986  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.670463  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:28:55.711204  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.711650  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:28:55.751880  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.752298  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:28:55.793260  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.793349  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:28:55.836082  398903 command_runner.go:130] > Certificate will not expire
	I1212 20:28:55.836162  398903 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:28:55.878637  398903 command_runner.go:130] > Certificate will not expire
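	The "-checkend 86400" flag asks openssl x509 whether the certificate will expire within the next 86400 seconds (24 hours), so each "Certificate will not expire" line above confirms the corresponding cert remains valid for at least another day.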
	I1212 20:28:55.879114  398903 kubeadm.go:401] StartCluster: {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFi
rmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:28:55.879241  398903 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:28:55.879321  398903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:28:55.906646  398903 cri.go:89] found id: ""
	I1212 20:28:55.906721  398903 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:28:55.913746  398903 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1212 20:28:55.913771  398903 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1212 20:28:55.913778  398903 command_runner.go:130] > /var/lib/minikube/etcd:
	I1212 20:28:55.914790  398903 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:28:55.914807  398903 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:28:55.914874  398903 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:28:55.922292  398903 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:28:55.922687  398903 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-261311" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.922785  398903 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "functional-261311" cluster setting kubeconfig missing "functional-261311" context setting]
	I1212 20:28:55.923055  398903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.923461  398903 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.923610  398903 kapi.go:59] client config for functional-261311: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:28:55.924164  398903 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:28:55.924185  398903 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:28:55.924192  398903 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:28:55.924198  398903 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:28:55.924202  398903 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:28:55.924512  398903 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:28:55.924617  398903 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 20:28:55.932459  398903 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 20:28:55.932497  398903 kubeadm.go:602] duration metric: took 17.683266ms to restartPrimaryControlPlane
	I1212 20:28:55.932527  398903 kubeadm.go:403] duration metric: took 53.402973ms to StartCluster
	I1212 20:28:55.932549  398903 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.932634  398903 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.933272  398903 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:28:55.933478  398903 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:28:55.933879  398903 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:28:55.933961  398903 addons.go:70] Setting storage-provisioner=true in profile "functional-261311"
	I1212 20:28:55.933975  398903 addons.go:239] Setting addon storage-provisioner=true in "functional-261311"
	I1212 20:28:55.933999  398903 host.go:66] Checking if "functional-261311" exists ...
	I1212 20:28:55.933941  398903 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:28:55.934065  398903 addons.go:70] Setting default-storageclass=true in profile "functional-261311"
	I1212 20:28:55.934077  398903 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-261311"
	I1212 20:28:55.934349  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.934437  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.939847  398903 out.go:179] * Verifying Kubernetes components...
	I1212 20:28:55.942718  398903 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:28:55.970904  398903 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1212 20:28:55.971648  398903 loader.go:402] Config loaded from file:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:28:55.971825  398903 kapi.go:59] client config for functional-261311: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:28:55.972098  398903 addons.go:239] Setting addon default-storageclass=true in "functional-261311"
	I1212 20:28:55.972128  398903 host.go:66] Checking if "functional-261311" exists ...
	I1212 20:28:55.972592  398903 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:28:55.974802  398903 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:55.974826  398903 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1212 20:28:55.974884  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:56.016147  398903 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:56.016169  398903 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1212 20:28:56.016234  398903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:28:56.029989  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:56.052293  398903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:28:56.147892  398903 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:28:56.182806  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:56.199875  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:56.957368  398903 node_ready.go:35] waiting up to 6m0s for node "functional-261311" to be "Ready" ...
	I1212 20:28:56.957463  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957488  398903 type.go:168] "Request Body" body=""
	I1212 20:28:56.957545  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	W1212 20:28:56.957546  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957630  398903 retry.go:31] will retry after 313.594755ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957713  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:56.957754  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957788  398903 retry.go:31] will retry after 317.565464ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:56.957910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:57.272396  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:57.275890  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:57.344322  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.344435  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.344471  398903 retry.go:31] will retry after 221.297028ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.351139  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.351181  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.351200  398903 retry.go:31] will retry after 309.802672ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.458417  398903 type.go:168] "Request Body" body=""
	I1212 20:28:57.458511  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:57.458807  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:57.566100  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:57.625592  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.625687  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.625728  398903 retry.go:31] will retry after 499.665469ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.661822  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:57.729487  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:57.729527  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.729550  398903 retry.go:31] will retry after 503.664724ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:57.958032  398903 type.go:168] "Request Body" body=""
	I1212 20:28:57.958134  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:57.958421  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:58.126013  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:58.197757  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:58.197828  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.197853  398903 retry.go:31] will retry after 1.10540153s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.234015  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:58.297441  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:58.297548  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.297576  398903 retry.go:31] will retry after 1.092264057s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:58.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:28:58.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:58.458062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:58.957619  398903 type.go:168] "Request Body" body=""
	I1212 20:28:58.957696  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:58.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:28:58.958116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:28:59.303542  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:28:59.364708  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:59.364773  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.364796  398903 retry.go:31] will retry after 1.503349263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.390910  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:28:59.449881  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:28:59.449970  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.450009  398903 retry.go:31] will retry after 1.024940216s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:28:59.457981  398903 type.go:168] "Request Body" body=""
	I1212 20:28:59.458049  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:59.458335  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:28:59.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:28:59.957671  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:28:59.957942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:00.457683  398903 type.go:168] "Request Body" body=""
	I1212 20:29:00.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:00.458074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:00.475497  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:00.543993  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:00.544048  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.544072  398903 retry.go:31] will retry after 2.24833219s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.868438  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:00.926476  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:00.930138  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.930173  398903 retry.go:31] will retry after 1.556562441s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:00.958315  398903 type.go:168] "Request Body" body=""
	I1212 20:29:00.958392  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:00.958734  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:00.958787  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:01.458585  398903 type.go:168] "Request Body" body=""
	I1212 20:29:01.458668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:01.458995  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:01.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:29:01.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:01.958122  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:02.457889  398903 type.go:168] "Request Body" body=""
	I1212 20:29:02.457969  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:02.458299  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:02.487755  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:02.545597  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:02.549667  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.549705  398903 retry.go:31] will retry after 1.726891228s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.793114  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:02.856403  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:02.860058  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.860101  398903 retry.go:31] will retry after 3.686133541s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:02.958383  398903 type.go:168] "Request Body" body=""
	I1212 20:29:02.958453  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:02.958724  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:03.458506  398903 type.go:168] "Request Body" body=""
	I1212 20:29:03.458589  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:03.458945  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:03.459000  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:03.957692  398903 type.go:168] "Request Body" body=""
	I1212 20:29:03.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:03.958210  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:04.277666  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:04.331675  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:04.335668  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:04.335700  398903 retry.go:31] will retry after 4.014847664s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:04.457944  398903 type.go:168] "Request Body" body=""
	I1212 20:29:04.458019  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:04.458285  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:04.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:04.957734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:04.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:05.457751  398903 type.go:168] "Request Body" body=""
	I1212 20:29:05.457828  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:05.458181  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:05.958009  398903 type.go:168] "Request Body" body=""
	I1212 20:29:05.958081  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:05.958416  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:05.958469  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:06.458265  398903 type.go:168] "Request Body" body=""
	I1212 20:29:06.458354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:06.458704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:06.546991  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:06.607592  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:06.607644  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:06.607664  398903 retry.go:31] will retry after 4.884355554s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:06.958122  398903 type.go:168] "Request Body" body=""
	I1212 20:29:06.958195  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:06.958538  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:07.458326  398903 type.go:168] "Request Body" body=""
	I1212 20:29:07.458394  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:07.458746  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:07.958381  398903 type.go:168] "Request Body" body=""
	I1212 20:29:07.958480  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:07.958781  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:07.958832  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:08.351452  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:08.404529  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:08.407970  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:08.408008  398903 retry.go:31] will retry after 4.723006947s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:08.458208  398903 type.go:168] "Request Body" body=""
	I1212 20:29:08.458304  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:08.458620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:08.958349  398903 type.go:168] "Request Body" body=""
	I1212 20:29:08.958418  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:08.958733  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:09.458562  398903 type.go:168] "Request Body" body=""
	I1212 20:29:09.458637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:09.458962  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:09.957658  398903 type.go:168] "Request Body" body=""
	I1212 20:29:09.957734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:09.958100  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:10.458537  398903 type.go:168] "Request Body" body=""
	I1212 20:29:10.458602  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:10.458869  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:10.458910  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:10.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:29:10.957729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:10.958048  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:11.457940  398903 type.go:168] "Request Body" body=""
	I1212 20:29:11.458047  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:11.458416  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:11.492814  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:11.557889  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:11.557940  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:11.557960  398903 retry.go:31] will retry after 4.177574733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:11.958412  398903 type.go:168] "Request Body" body=""
	I1212 20:29:11.958494  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:11.958766  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:12.458544  398903 type.go:168] "Request Body" body=""
	I1212 20:29:12.458627  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:12.458916  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:12.458972  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:12.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:29:12.957732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:12.958047  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:13.131713  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:13.192350  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:13.192414  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:13.192433  398903 retry.go:31] will retry after 8.846505763s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:13.457684  398903 type.go:168] "Request Body" body=""
	I1212 20:29:13.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:13.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:13.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:29:13.957726  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:13.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:14.457780  398903 type.go:168] "Request Body" body=""
	I1212 20:29:14.457878  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:14.458172  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:14.957892  398903 type.go:168] "Request Body" body=""
	I1212 20:29:14.957968  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:14.958296  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:14.958356  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:15.457665  398903 type.go:168] "Request Body" body=""
	I1212 20:29:15.457745  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:15.458081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:15.737088  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:15.794323  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:15.794363  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:15.794386  398903 retry.go:31] will retry after 13.823463892s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
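
(Editor's note: interleaved with the polling, the addon path runs kubectl apply --force over SSH and, when the apiserver is unreachable, schedules another attempt after a growing delay (13.8 s here, roughly 30 s in later attempts). A rough sketch of that apply-with-retry behaviour follows; the helper names and backoff policy are hypothetical and chosen for illustration, not taken from minikube's retry.go.)

	// Illustrative sketch only: run a command and retry with a growing,
	// jittered delay when it fails, mirroring the retry.go messages above.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func applyWithRetry(manifest string, attempts int) error {
		delay := 10 * time.Second // starting point; the real policy may differ
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
			if err == nil {
				return nil
			}
			// Mirrors the "apply failed, will retry" / "will retry after ..." log lines.
			fmt.Printf("apply failed, will retry: %v\n%s", err, out)
			sleep := delay + time.Duration(rand.Int63n(int64(5*time.Second)))
			fmt.Printf("will retry after %s\n", sleep)
			time.Sleep(sleep)
			delay += 10 * time.Second
		}
		return fmt.Errorf("apply of %s did not succeed after %d attempts", manifest, attempts)
	}

	func main() {
		_ = applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 3)
	}
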
	I1212 20:29:15.958001  398903 type.go:168] "Request Body" body=""
	I1212 20:29:15.958077  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:15.958395  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:16.458178  398903 type.go:168] "Request Body" body=""
	I1212 20:29:16.458264  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:16.458517  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:16.958289  398903 type.go:168] "Request Body" body=""
	I1212 20:29:16.958364  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:16.958733  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:16.958807  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:17.458384  398903 type.go:168] "Request Body" body=""
	I1212 20:29:17.458485  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:17.458800  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:17.958573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:17.958679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:17.958934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:18.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:29:18.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:18.458009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:18.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:29:18.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:18.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:19.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:29:19.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:19.457978  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:19.458044  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:19.957579  398903 type.go:168] "Request Body" body=""
	I1212 20:29:19.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:19.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:20.457635  398903 type.go:168] "Request Body" body=""
	I1212 20:29:20.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:20.458035  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:20.957568  398903 type.go:168] "Request Body" body=""
	I1212 20:29:20.957646  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:20.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:21.457974  398903 type.go:168] "Request Body" body=""
	I1212 20:29:21.458051  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:21.458401  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:21.458459  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:21.958216  398903 type.go:168] "Request Body" body=""
	I1212 20:29:21.958294  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:21.958620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:22.040027  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:22.098166  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:22.102301  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:22.102333  398903 retry.go:31] will retry after 9.311877294s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:22.458542  398903 type.go:168] "Request Body" body=""
	I1212 20:29:22.458608  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:22.458864  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:22.957555  398903 type.go:168] "Request Body" body=""
	I1212 20:29:22.957628  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:22.957965  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:23.457696  398903 type.go:168] "Request Body" body=""
	I1212 20:29:23.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:23.458108  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:23.957780  398903 type.go:168] "Request Body" body=""
	I1212 20:29:23.957869  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:23.958143  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:23.958184  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:24.457666  398903 type.go:168] "Request Body" body=""
	I1212 20:29:24.457740  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:24.458060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:24.957754  398903 type.go:168] "Request Body" body=""
	I1212 20:29:24.957831  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:24.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:25.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:29:25.457678  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:25.457956  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:25.958502  398903 type.go:168] "Request Body" body=""
	I1212 20:29:25.958583  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:25.958919  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:25.958993  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:26.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:29:26.457736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:26.458131  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:26.957783  398903 type.go:168] "Request Body" body=""
	I1212 20:29:26.957860  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:26.958177  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:27.457614  398903 type.go:168] "Request Body" body=""
	I1212 20:29:27.457693  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:27.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:27.957616  398903 type.go:168] "Request Body" body=""
	I1212 20:29:27.957698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:27.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:28.457711  398903 type.go:168] "Request Body" body=""
	I1212 20:29:28.457785  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:28.458119  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:28.458170  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:28.957619  398903 type.go:168] "Request Body" body=""
	I1212 20:29:28.957713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:28.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:29.457661  398903 type.go:168] "Request Body" body=""
	I1212 20:29:29.457736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:29.458113  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:29.618498  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:29.673247  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:29.677091  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:29.677126  398903 retry.go:31] will retry after 12.247484069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:29.958487  398903 type.go:168] "Request Body" body=""
	I1212 20:29:29.958556  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:29.958828  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:30.457589  398903 type.go:168] "Request Body" body=""
	I1212 20:29:30.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:30.458053  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:30.957764  398903 type.go:168] "Request Body" body=""
	I1212 20:29:30.957837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:30.958165  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:30.958221  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:31.415106  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:29:31.457708  398903 type.go:168] "Request Body" body=""
	I1212 20:29:31.457795  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:31.458059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:31.477657  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:31.481452  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:31.481486  398903 retry.go:31] will retry after 29.999837192s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:31.958251  398903 type.go:168] "Request Body" body=""
	I1212 20:29:31.958329  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:31.958678  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:32.458335  398903 type.go:168] "Request Body" body=""
	I1212 20:29:32.458415  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:32.458816  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:32.958367  398903 type.go:168] "Request Body" body=""
	I1212 20:29:32.958440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:32.958702  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:32.958743  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:33.458498  398903 type.go:168] "Request Body" body=""
	I1212 20:29:33.458574  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:33.458942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:33.957518  398903 type.go:168] "Request Body" body=""
	I1212 20:29:33.957595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:33.957939  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:34.457617  398903 type.go:168] "Request Body" body=""
	I1212 20:29:34.457695  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:34.457969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:34.957613  398903 type.go:168] "Request Body" body=""
	I1212 20:29:34.957696  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:34.958009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:35.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:29:35.457690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:35.458075  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:35.458135  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:35.957713  398903 type.go:168] "Request Body" body=""
	I1212 20:29:35.957790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:35.958111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:36.457989  398903 type.go:168] "Request Body" body=""
	I1212 20:29:36.458070  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:36.458457  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:36.958268  398903 type.go:168] "Request Body" body=""
	I1212 20:29:36.958361  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:36.958681  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:37.458419  398903 type.go:168] "Request Body" body=""
	I1212 20:29:37.458489  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:37.458760  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:37.458803  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:37.958548  398903 type.go:168] "Request Body" body=""
	I1212 20:29:37.958632  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:37.958989  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:38.457703  398903 type.go:168] "Request Body" body=""
	I1212 20:29:38.457783  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:38.458130  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:38.957582  398903 type.go:168] "Request Body" body=""
	I1212 20:29:38.957648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:38.957909  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:39.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:29:39.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:39.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:39.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:39.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:39.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:39.958142  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:40.458512  398903 type.go:168] "Request Body" body=""
	I1212 20:29:40.458585  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:40.458875  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:40.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:40.957663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:40.957999  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:41.458005  398903 type.go:168] "Request Body" body=""
	I1212 20:29:41.458079  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:41.458415  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:41.924900  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:29:41.958510  398903 type.go:168] "Request Body" body=""
	I1212 20:29:41.958584  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:41.958850  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:41.958891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:42.001052  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:29:42.001094  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:42.001115  398903 retry.go:31] will retry after 30.772279059s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:29:42.457672  398903 type.go:168] "Request Body" body=""
	I1212 20:29:42.457755  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:42.458082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:42.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:29:42.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:42.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:43.458540  398903 type.go:168] "Request Body" body=""
	I1212 20:29:43.458610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:43.458870  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:43.957586  398903 type.go:168] "Request Body" body=""
	I1212 20:29:43.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:43.958032  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:44.457633  398903 type.go:168] "Request Body" body=""
	I1212 20:29:44.457707  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:44.458045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:44.458100  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:44.957734  398903 type.go:168] "Request Body" body=""
	I1212 20:29:44.957834  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:44.958170  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:45.457726  398903 type.go:168] "Request Body" body=""
	I1212 20:29:45.457799  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:45.458152  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:45.957997  398903 type.go:168] "Request Body" body=""
	I1212 20:29:45.958081  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:45.958445  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:46.458286  398903 type.go:168] "Request Body" body=""
	I1212 20:29:46.458355  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:46.458622  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:46.458663  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:46.958455  398903 type.go:168] "Request Body" body=""
	I1212 20:29:46.958553  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:46.958947  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:47.457794  398903 type.go:168] "Request Body" body=""
	I1212 20:29:47.457932  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:47.458463  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:47.958292  398903 type.go:168] "Request Body" body=""
	I1212 20:29:47.958370  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:47.958645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:48.458483  398903 type.go:168] "Request Body" body=""
	I1212 20:29:48.458555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:48.458899  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:48.458971  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:48.957649  398903 type.go:168] "Request Body" body=""
	I1212 20:29:48.957731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:48.958090  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:49.457581  398903 type.go:168] "Request Body" body=""
	I1212 20:29:49.457649  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:49.457920  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:49.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:29:49.957681  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:49.958050  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:50.457756  398903 type.go:168] "Request Body" body=""
	I1212 20:29:50.457838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:50.458163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:50.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:29:50.957647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:50.957983  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:50.958033  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:51.457978  398903 type.go:168] "Request Body" body=""
	I1212 20:29:51.458054  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:51.458398  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:51.958201  398903 type.go:168] "Request Body" body=""
	I1212 20:29:51.958282  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:51.958598  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:52.458345  398903 type.go:168] "Request Body" body=""
	I1212 20:29:52.458418  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:52.458689  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:52.958457  398903 type.go:168] "Request Body" body=""
	I1212 20:29:52.958540  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:52.958883  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:52.958945  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:53.457615  398903 type.go:168] "Request Body" body=""
	I1212 20:29:53.457698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:53.458072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:53.957603  398903 type.go:168] "Request Body" body=""
	I1212 20:29:53.957674  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:53.957991  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:54.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:29:54.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:54.458053  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:54.957787  398903 type.go:168] "Request Body" body=""
	I1212 20:29:54.957892  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:54.958225  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:55.457579  398903 type.go:168] "Request Body" body=""
	I1212 20:29:55.457654  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:55.457934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:55.457987  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:55.957904  398903 type.go:168] "Request Body" body=""
	I1212 20:29:55.957979  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:55.958319  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:56.458108  398903 type.go:168] "Request Body" body=""
	I1212 20:29:56.458185  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:56.458525  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:56.958251  398903 type.go:168] "Request Body" body=""
	I1212 20:29:56.958317  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:56.958572  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:57.458381  398903 type.go:168] "Request Body" body=""
	I1212 20:29:57.458456  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:57.458824  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:57.458880  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:29:57.957590  398903 type.go:168] "Request Body" body=""
	I1212 20:29:57.957685  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:57.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:58.457591  398903 type.go:168] "Request Body" body=""
	I1212 20:29:58.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:58.457943  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:58.957651  398903 type.go:168] "Request Body" body=""
	I1212 20:29:58.957737  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:58.958104  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:59.457826  398903 type.go:168] "Request Body" body=""
	I1212 20:29:59.457924  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:59.458273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:29:59.957645  398903 type.go:168] "Request Body" body=""
	I1212 20:29:59.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:29:59.958054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:29:59.958118  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:00.457778  398903 type.go:168] "Request Body" body=""
	I1212 20:30:00.457870  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:00.458208  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:00.958235  398903 type.go:168] "Request Body" body=""
	I1212 20:30:00.958321  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:00.958755  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:01.460861  398903 type.go:168] "Request Body" body=""
	I1212 20:30:01.460950  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:01.461277  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:01.481640  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:30:01.559465  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:01.559521  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:01.559544  398903 retry.go:31] will retry after 33.36515596s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:01.958099  398903 type.go:168] "Request Body" body=""
	I1212 20:30:01.958188  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:01.958490  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:01.958533  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:02.458305  398903 type.go:168] "Request Body" body=""
	I1212 20:30:02.458381  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:02.458719  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:02.958386  398903 type.go:168] "Request Body" body=""
	I1212 20:30:02.958464  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:02.958745  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:03.457579  398903 type.go:168] "Request Body" body=""
	I1212 20:30:03.457694  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:03.458099  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:03.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:30:03.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:03.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:04.457668  398903 type.go:168] "Request Body" body=""
	I1212 20:30:04.457751  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:04.458056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:04.458116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:04.957692  398903 type.go:168] "Request Body" body=""
	I1212 20:30:04.957771  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:04.958103  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:05.457691  398903 type.go:168] "Request Body" body=""
	I1212 20:30:05.457777  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:05.458124  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:05.958166  398903 type.go:168] "Request Body" body=""
	I1212 20:30:05.958257  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:05.958561  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:06.458375  398903 type.go:168] "Request Body" body=""
	I1212 20:30:06.458451  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:06.458788  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:06.458844  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:06.957529  398903 type.go:168] "Request Body" body=""
	I1212 20:30:06.957610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:06.957955  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:07.457552  398903 type.go:168] "Request Body" body=""
	I1212 20:30:07.457657  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:07.457968  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:07.957700  398903 type.go:168] "Request Body" body=""
	I1212 20:30:07.957780  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:07.958080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:08.457647  398903 type.go:168] "Request Body" body=""
	I1212 20:30:08.457728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:08.458065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:08.957730  398903 type.go:168] "Request Body" body=""
	I1212 20:30:08.957837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:08.958111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:08.958162  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:09.457851  398903 type.go:168] "Request Body" body=""
	I1212 20:30:09.457929  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:09.458309  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:09.958049  398903 type.go:168] "Request Body" body=""
	I1212 20:30:09.958147  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:09.958566  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:10.458362  398903 type.go:168] "Request Body" body=""
	I1212 20:30:10.458440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:10.458707  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:10.958517  398903 type.go:168] "Request Body" body=""
	I1212 20:30:10.958590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:10.958916  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:10.958976  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:11.457913  398903 type.go:168] "Request Body" body=""
	I1212 20:30:11.458009  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:11.458358  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:11.958078  398903 type.go:168] "Request Body" body=""
	I1212 20:30:11.958148  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:11.958429  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:12.458295  398903 type.go:168] "Request Body" body=""
	I1212 20:30:12.458371  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:12.458726  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:12.774318  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:30:12.840421  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:12.840464  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:12.840483  398903 retry.go:31] will retry after 30.011296842s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1212 20:30:12.957679  398903 type.go:168] "Request Body" body=""
	I1212 20:30:12.957756  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:12.958081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:13.457610  398903 type.go:168] "Request Body" body=""
	I1212 20:30:13.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:13.457937  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:13.457978  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:13.957691  398903 type.go:168] "Request Body" body=""
	I1212 20:30:13.957779  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:13.958199  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:14.457740  398903 type.go:168] "Request Body" body=""
	I1212 20:30:14.457821  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:14.458184  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:14.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:30:14.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:14.958021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:15.457670  398903 type.go:168] "Request Body" body=""
	I1212 20:30:15.457751  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:15.458088  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:15.458148  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:15.958126  398903 type.go:168] "Request Body" body=""
	I1212 20:30:15.958215  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:15.958644  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:16.458362  398903 type.go:168] "Request Body" body=""
	I1212 20:30:16.458429  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:16.458692  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:16.958433  398903 type.go:168] "Request Body" body=""
	I1212 20:30:16.958508  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:16.958865  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:17.458563  398903 type.go:168] "Request Body" body=""
	I1212 20:30:17.458662  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:17.459072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:17.459137  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:17.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:30:17.957765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:17.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:18.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:30:18.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:18.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:18.957647  398903 type.go:168] "Request Body" body=""
	I1212 20:30:18.957740  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:18.958158  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:19.457570  398903 type.go:168] "Request Body" body=""
	I1212 20:30:19.457653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:19.457996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:19.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:30:19.957747  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:19.958095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:19.958157  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:20.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:30:20.457785  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:20.458135  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:20.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:30:20.957690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:20.958023  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:21.458157  398903 type.go:168] "Request Body" body=""
	I1212 20:30:21.458249  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:21.458570  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:21.958397  398903 type.go:168] "Request Body" body=""
	I1212 20:30:21.958474  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:21.958860  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:21.958919  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:22.457576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:22.457650  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:22.457962  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:22.957698  398903 type.go:168] "Request Body" body=""
	I1212 20:30:22.957818  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:22.958168  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:23.457673  398903 type.go:168] "Request Body" body=""
	I1212 20:30:23.457752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:23.458096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:23.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:23.957683  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:23.957979  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:24.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:30:24.457734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:24.458020  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:24.458072  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:24.957672  398903 type.go:168] "Request Body" body=""
	I1212 20:30:24.957748  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:24.958123  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:25.457534  398903 type.go:168] "Request Body" body=""
	I1212 20:30:25.457604  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:25.457872  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:25.958565  398903 type.go:168] "Request Body" body=""
	I1212 20:30:25.958637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:25.958933  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:26.457975  398903 type.go:168] "Request Body" body=""
	I1212 20:30:26.458048  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:26.458392  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:26.458450  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:26.957925  398903 type.go:168] "Request Body" body=""
	I1212 20:30:26.957996  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:26.958288  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:27.457662  398903 type.go:168] "Request Body" body=""
	I1212 20:30:27.457734  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:27.458086  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:27.957807  398903 type.go:168] "Request Body" body=""
	I1212 20:30:27.957887  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:27.958218  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:28.457696  398903 type.go:168] "Request Body" body=""
	I1212 20:30:28.457762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:28.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:28.957686  398903 type.go:168] "Request Body" body=""
	I1212 20:30:28.957778  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:28.958129  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:28.958185  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:29.457860  398903 type.go:168] "Request Body" body=""
	I1212 20:30:29.457948  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:29.458268  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:29.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:30:29.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:29.957934  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:30.457654  398903 type.go:168] "Request Body" body=""
	I1212 20:30:30.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:30.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:30.957783  398903 type.go:168] "Request Body" body=""
	I1212 20:30:30.957859  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:30.958248  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:30.958301  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:31.458270  398903 type.go:168] "Request Body" body=""
	I1212 20:30:31.458363  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:31.458639  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:31.958457  398903 type.go:168] "Request Body" body=""
	I1212 20:30:31.958547  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:31.958925  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:32.457675  398903 type.go:168] "Request Body" body=""
	I1212 20:30:32.457752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:32.458042  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:32.957526  398903 type.go:168] "Request Body" body=""
	I1212 20:30:32.957599  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:32.957876  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:33.457638  398903 type.go:168] "Request Body" body=""
	I1212 20:30:33.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:33.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:33.458151  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:33.957835  398903 type.go:168] "Request Body" body=""
	I1212 20:30:33.957912  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:33.958249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:30:34.457709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:34.458076  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.925852  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1212 20:30:34.958350  398903 type.go:168] "Request Body" body=""
	I1212 20:30:34.958426  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:34.958704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:34.987024  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:34.990602  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:34.990708  398903 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 20:30:35.458275  398903 type.go:168] "Request Body" body=""
	I1212 20:30:35.458354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:35.458681  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:35.458739  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:35.958407  398903 type.go:168] "Request Body" body=""
	I1212 20:30:35.958492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:35.958762  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:36.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:30:36.457712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:36.458038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:36.957607  398903 type.go:168] "Request Body" body=""
	I1212 20:30:36.957687  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:36.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:37.457711  398903 type.go:168] "Request Body" body=""
	I1212 20:30:37.457790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:37.458074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:37.957761  398903 type.go:168] "Request Body" body=""
	I1212 20:30:37.957838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:37.958213  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:37.958272  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:38.457940  398903 type.go:168] "Request Body" body=""
	I1212 20:30:38.458016  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:38.458369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:38.958134  398903 type.go:168] "Request Body" body=""
	I1212 20:30:38.958210  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:38.958478  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:39.458248  398903 type.go:168] "Request Body" body=""
	I1212 20:30:39.458336  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:39.458729  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:39.958456  398903 type.go:168] "Request Body" body=""
	I1212 20:30:39.958539  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:39.958888  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:39.958942  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:40.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:30:40.457648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:40.457967  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:40.957645  398903 type.go:168] "Request Body" body=""
	I1212 20:30:40.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:40.958059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:41.458059  398903 type.go:168] "Request Body" body=""
	I1212 20:30:41.458151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:41.458482  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:41.958252  398903 type.go:168] "Request Body" body=""
	I1212 20:30:41.958327  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:41.958608  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:42.458416  398903 type.go:168] "Request Body" body=""
	I1212 20:30:42.458492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:42.458825  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:42.458889  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:42.852572  398903 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1212 20:30:42.917565  398903 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:42.921658  398903 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1212 20:30:42.921759  398903 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1212 20:30:42.924799  398903 out.go:179] * Enabled addons: 
	I1212 20:30:42.926930  398903 addons.go:530] duration metric: took 1m46.993054127s for enable addons: enabled=[]
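At this point addon enablement gives up with an empty enabled list, while the surrounding GET /api/v1/nodes/functional-261311 requests continue: that is the node-readiness wait loop, which keeps polling the node's Ready condition and logs "will retry" whenever the apiserver refuses the connection. Below is a minimal client-go sketch of such a poll, under the assumption that client-go is available; the kubeconfig path, interval, and timeout are illustrative, not minikube's node_ready.go values.

// A minimal sketch of a node-readiness poll like the one producing the
// repeated GET requests above: fetch the node every 500ms and stop once its
// Ready condition is True or a timeout elapses. Paths and durations are
// assumptions for illustration.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitForNodeReady(client kubernetes.Interface, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			// Matches the warnings in the log: keep polling through errors such
			// as "connection refused" while the apiserver is restarting.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		} else {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %q was not Ready within %s", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	if err := waitForNodeReady(client, "functional-261311", 5*time.Minute); err != nil {
		fmt.Println(err)
	}
}

The half-second interval matches the cadence of the timestamps in the log; the actual wait in minikube is driven by its own helper and deadline handling.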
	I1212 20:30:42.957819  398903 type.go:168] "Request Body" body=""
	I1212 20:30:42.957896  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:42.958219  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:43.457528  398903 type.go:168] "Request Body" body=""
	I1212 20:30:43.457600  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:43.457900  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:43.957607  398903 type.go:168] "Request Body" body=""
	I1212 20:30:43.957687  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:43.958029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:44.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:30:44.457688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:44.458022  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:44.957587  398903 type.go:168] "Request Body" body=""
	I1212 20:30:44.957676  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:44.957941  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:44.957982  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:45.457697  398903 type.go:168] "Request Body" body=""
	I1212 20:30:45.457796  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:45.458121  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:45.958191  398903 type.go:168] "Request Body" body=""
	I1212 20:30:45.958294  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:45.958612  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:46.458444  398903 type.go:168] "Request Body" body=""
	I1212 20:30:46.458532  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:46.458807  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:46.957599  398903 type.go:168] "Request Body" body=""
	I1212 20:30:46.957698  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:46.958064  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:46.958134  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:47.457807  398903 type.go:168] "Request Body" body=""
	I1212 20:30:47.457902  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:47.458266  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:47.957963  398903 type.go:168] "Request Body" body=""
	I1212 20:30:47.958044  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:47.958323  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:48.457878  398903 type.go:168] "Request Body" body=""
	I1212 20:30:48.457954  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:48.458353  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:48.957937  398903 type.go:168] "Request Body" body=""
	I1212 20:30:48.958025  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:48.958407  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:48.958465  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:49.458150  398903 type.go:168] "Request Body" body=""
	I1212 20:30:49.458217  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:49.458483  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:49.958339  398903 type.go:168] "Request Body" body=""
	I1212 20:30:49.958422  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:49.958782  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:50.457522  398903 type.go:168] "Request Body" body=""
	I1212 20:30:50.457619  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:50.457974  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:50.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:30:50.957709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:50.957969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:51.457956  398903 type.go:168] "Request Body" body=""
	I1212 20:30:51.458033  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:51.458372  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:51.458436  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:51.958247  398903 type.go:168] "Request Body" body=""
	I1212 20:30:51.958354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:51.958760  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:52.458531  398903 type.go:168] "Request Body" body=""
	I1212 20:30:52.458606  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:52.458887  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:52.957622  398903 type.go:168] "Request Body" body=""
	I1212 20:30:52.957701  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:52.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:53.457803  398903 type.go:168] "Request Body" body=""
	I1212 20:30:53.457880  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:53.458232  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:53.957948  398903 type.go:168] "Request Body" body=""
	I1212 20:30:53.958039  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:53.958314  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:53.958357  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:54.458007  398903 type.go:168] "Request Body" body=""
	I1212 20:30:54.458120  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:54.458562  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:54.957657  398903 type.go:168] "Request Body" body=""
	I1212 20:30:54.957767  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:54.958125  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:55.457599  398903 type.go:168] "Request Body" body=""
	I1212 20:30:55.457671  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:55.458062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:55.958515  398903 type.go:168] "Request Body" body=""
	I1212 20:30:55.958592  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:55.958958  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:55.959020  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:56.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:30:56.457702  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:56.458059  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:56.957581  398903 type.go:168] "Request Body" body=""
	I1212 20:30:56.957655  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:56.957949  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:57.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:30:57.457710  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:57.458063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:57.958430  398903 type.go:168] "Request Body" body=""
	I1212 20:30:57.958528  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:57.958868  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:58.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:30:58.457682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:58.458002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:30:58.458062  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:30:58.957718  398903 type.go:168] "Request Body" body=""
	I1212 20:30:58.957798  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:58.958154  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:59.457651  398903 type.go:168] "Request Body" body=""
	I1212 20:30:59.457732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:59.458077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:30:59.957798  398903 type.go:168] "Request Body" body=""
	I1212 20:30:59.957888  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:30:59.958201  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:00.457692  398903 type.go:168] "Request Body" body=""
	I1212 20:31:00.457780  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:00.458189  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:00.458250  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:00.957940  398903 type.go:168] "Request Body" body=""
	I1212 20:31:00.958024  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:00.958346  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:01.458223  398903 type.go:168] "Request Body" body=""
	I1212 20:31:01.458299  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:01.458574  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:01.958306  398903 type.go:168] "Request Body" body=""
	I1212 20:31:01.958388  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:01.958736  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:02.458565  398903 type.go:168] "Request Body" body=""
	I1212 20:31:02.458645  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:02.459016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:02.459076  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:02.957720  398903 type.go:168] "Request Body" body=""
	I1212 20:31:02.957798  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:02.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:03.457664  398903 type.go:168] "Request Body" body=""
	I1212 20:31:03.457746  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:03.458099  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:03.957853  398903 type.go:168] "Request Body" body=""
	I1212 20:31:03.957937  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:03.958274  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:04.457595  398903 type.go:168] "Request Body" body=""
	I1212 20:31:04.457669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:04.458030  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:04.957597  398903 type.go:168] "Request Body" body=""
	I1212 20:31:04.957676  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:04.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:04.958098  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:05.457625  398903 type.go:168] "Request Body" body=""
	I1212 20:31:05.457701  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:05.458052  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:05.957782  398903 type.go:168] "Request Body" body=""
	I1212 20:31:05.957863  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:05.958194  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:06.458145  398903 type.go:168] "Request Body" body=""
	I1212 20:31:06.458228  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:06.458587  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:06.958415  398903 type.go:168] "Request Body" body=""
	I1212 20:31:06.958493  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:06.958820  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:06.958879  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:07.457506  398903 type.go:168] "Request Body" body=""
	I1212 20:31:07.457575  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:07.457849  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:07.957622  398903 type.go:168] "Request Body" body=""
	I1212 20:31:07.957714  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:07.958056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:08.457776  398903 type.go:168] "Request Body" body=""
	I1212 20:31:08.457879  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:08.458223  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:08.957577  398903 type.go:168] "Request Body" body=""
	I1212 20:31:08.957652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:08.957982  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:09.457626  398903 type.go:168] "Request Body" body=""
	I1212 20:31:09.457705  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:09.458016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:09.458076  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:09.957794  398903 type.go:168] "Request Body" body=""
	I1212 20:31:09.957907  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:09.958279  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:10.457971  398903 type.go:168] "Request Body" body=""
	I1212 20:31:10.458047  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:10.458382  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:10.958220  398903 type.go:168] "Request Body" body=""
	I1212 20:31:10.958321  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:10.958714  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:11.457646  398903 type.go:168] "Request Body" body=""
	I1212 20:31:11.457724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:11.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:11.458138  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:11.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:31:11.957664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:11.957969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:12.457612  398903 type.go:168] "Request Body" body=""
	I1212 20:31:12.457686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:12.458031  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:12.957743  398903 type.go:168] "Request Body" body=""
	I1212 20:31:12.957841  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:12.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:13.458376  398903 type.go:168] "Request Body" body=""
	I1212 20:31:13.458443  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:13.458763  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:13.458818  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:13.958577  398903 type.go:168] "Request Body" body=""
	I1212 20:31:13.958652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:13.958977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:14.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:31:14.457733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:14.458101  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:14.957799  398903 type.go:168] "Request Body" body=""
	I1212 20:31:14.957875  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:14.958197  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:15.457653  398903 type.go:168] "Request Body" body=""
	I1212 20:31:15.457732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:15.458080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:15.958122  398903 type.go:168] "Request Body" body=""
	I1212 20:31:15.958204  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:15.958537  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:15.958599  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:16.458429  398903 type.go:168] "Request Body" body=""
	I1212 20:31:16.458501  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:16.458769  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:16.957534  398903 type.go:168] "Request Body" body=""
	I1212 20:31:16.957617  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:16.957998  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:17.457728  398903 type.go:168] "Request Body" body=""
	I1212 20:31:17.457806  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:17.458115  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:17.957591  398903 type.go:168] "Request Body" body=""
	I1212 20:31:17.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:17.958019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:18.457741  398903 type.go:168] "Request Body" body=""
	I1212 20:31:18.457847  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:18.458133  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:18.458180  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:18.957696  398903 type.go:168] "Request Body" body=""
	I1212 20:31:18.957790  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:18.958212  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:19.457727  398903 type.go:168] "Request Body" body=""
	I1212 20:31:19.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:19.458140  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:19.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:31:19.957742  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:19.958077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:20.457686  398903 type.go:168] "Request Body" body=""
	I1212 20:31:20.457762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:20.458091  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:20.957576  398903 type.go:168] "Request Body" body=""
	I1212 20:31:20.957650  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:20.957923  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:20.957972  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:21.457915  398903 type.go:168] "Request Body" body=""
	I1212 20:31:21.457990  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:21.458320  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:21.958165  398903 type.go:168] "Request Body" body=""
	I1212 20:31:21.958276  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:21.958607  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:22.458365  398903 type.go:168] "Request Body" body=""
	I1212 20:31:22.458440  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:22.458716  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:22.958558  398903 type.go:168] "Request Body" body=""
	I1212 20:31:22.958659  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:22.959007  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:22.959071  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:23.457766  398903 type.go:168] "Request Body" body=""
	I1212 20:31:23.457845  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:23.458211  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:23.957896  398903 type.go:168] "Request Body" body=""
	I1212 20:31:23.957969  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:23.958315  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:24.457613  398903 type.go:168] "Request Body" body=""
	I1212 20:31:24.457714  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:24.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:24.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:31:24.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:24.958115  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:25.457623  398903 type.go:168] "Request Body" body=""
	I1212 20:31:25.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:25.457977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:25.458017  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:25.958041  398903 type.go:168] "Request Body" body=""
	I1212 20:31:25.958123  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:25.958512  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:26.458319  398903 type.go:168] "Request Body" body=""
	I1212 20:31:26.458398  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:26.458689  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:26.958470  398903 type.go:168] "Request Body" body=""
	I1212 20:31:26.958549  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:26.958846  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:27.457587  398903 type.go:168] "Request Body" body=""
	I1212 20:31:27.457677  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:27.457993  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:27.458047  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:27.957637  398903 type.go:168] "Request Body" body=""
	I1212 20:31:27.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:27.958051  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:28.457523  398903 type.go:168] "Request Body" body=""
	I1212 20:31:28.457597  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:28.457900  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:28.957667  398903 type.go:168] "Request Body" body=""
	I1212 20:31:28.957755  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:28.958112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:29.457645  398903 type.go:168] "Request Body" body=""
	I1212 20:31:29.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:29.458112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:29.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:29.957515  398903 type.go:168] "Request Body" body=""
	I1212 20:31:29.957590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:29.957922  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:30.457639  398903 type.go:168] "Request Body" body=""
	I1212 20:31:30.457715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:30.458057  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:30.957753  398903 type.go:168] "Request Body" body=""
	I1212 20:31:30.957854  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:30.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:31.458036  398903 type.go:168] "Request Body" body=""
	I1212 20:31:31.458104  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:31.458369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:31.458409  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:31.958181  398903 type.go:168] "Request Body" body=""
	I1212 20:31:31.958258  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:31.958643  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:32.458473  398903 type.go:168] "Request Body" body=""
	I1212 20:31:32.458585  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:32.458949  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:32.957626  398903 type.go:168] "Request Body" body=""
	I1212 20:31:32.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:32.958012  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:33.457650  398903 type.go:168] "Request Body" body=""
	I1212 20:31:33.457738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:33.458114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:33.957824  398903 type.go:168] "Request Body" body=""
	I1212 20:31:33.957905  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:33.958247  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:33.958303  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:34.458003  398903 type.go:168] "Request Body" body=""
	I1212 20:31:34.458078  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:34.458409  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:34.958240  398903 type.go:168] "Request Body" body=""
	I1212 20:31:34.958349  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:34.958734  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:35.458572  398903 type.go:168] "Request Body" body=""
	I1212 20:31:35.458682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:35.459077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:35.958480  398903 type.go:168] "Request Body" body=""
	I1212 20:31:35.958555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:35.958847  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:35.958891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:36.457738  398903 type.go:168] "Request Body" body=""
	I1212 20:31:36.457817  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:36.458167  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:36.957850  398903 type.go:168] "Request Body" body=""
	I1212 20:31:36.957948  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:36.958275  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:37.457594  398903 type.go:168] "Request Body" body=""
	I1212 20:31:37.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:37.457978  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:37.957634  398903 type.go:168] "Request Body" body=""
	I1212 20:31:37.957712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:37.958057  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:38.457680  398903 type.go:168] "Request Body" body=""
	I1212 20:31:38.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:38.458134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:38.458189  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:38.957510  398903 type.go:168] "Request Body" body=""
	I1212 20:31:38.957592  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:38.957862  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:39.457578  398903 type.go:168] "Request Body" body=""
	I1212 20:31:39.457664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:39.457985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:39.957715  398903 type.go:168] "Request Body" body=""
	I1212 20:31:39.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:39.958106  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:40.457563  398903 type.go:168] "Request Body" body=""
	I1212 20:31:40.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:40.457964  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:40.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:31:40.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:40.958114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:40.958173  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:41.457926  398903 type.go:168] "Request Body" body=""
	I1212 20:31:41.458028  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:41.458354  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:41.958180  398903 type.go:168] "Request Body" body=""
	I1212 20:31:41.958256  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:41.958548  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:42.458349  398903 type.go:168] "Request Body" body=""
	I1212 20:31:42.458439  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:42.458833  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:42.958514  398903 type.go:168] "Request Body" body=""
	I1212 20:31:42.958594  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:42.958932  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:42.958992  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:43.457618  398903 type.go:168] "Request Body" body=""
	I1212 20:31:43.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:43.458058  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:43.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:31:43.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:43.958071  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:44.457779  398903 type.go:168] "Request Body" body=""
	I1212 20:31:44.457857  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:44.458177  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:44.957579  398903 type.go:168] "Request Body" body=""
	I1212 20:31:44.957657  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:44.957982  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:45.457590  398903 type.go:168] "Request Body" body=""
	I1212 20:31:45.457667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:45.458010  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:45.458070  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:45.957784  398903 type.go:168] "Request Body" body=""
	I1212 20:31:45.957877  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:45.958249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:46.458071  398903 type.go:168] "Request Body" body=""
	I1212 20:31:46.458151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:46.458414  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:46.958212  398903 type.go:168] "Request Body" body=""
	I1212 20:31:46.958295  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:46.958642  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:47.458480  398903 type.go:168] "Request Body" body=""
	I1212 20:31:47.458558  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:47.458926  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:47.458982  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:47.957584  398903 type.go:168] "Request Body" body=""
	I1212 20:31:47.957658  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:47.957921  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:48.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:31:48.457764  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:48.458171  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:48.957862  398903 type.go:168] "Request Body" body=""
	I1212 20:31:48.957972  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:48.958326  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:49.458004  398903 type.go:168] "Request Body" body=""
	I1212 20:31:49.458083  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:49.458381  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:49.958209  398903 type.go:168] "Request Body" body=""
	I1212 20:31:49.958290  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:49.958636  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:49.958695  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:50.458420  398903 type.go:168] "Request Body" body=""
	I1212 20:31:50.458495  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:50.458818  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:50.957496  398903 type.go:168] "Request Body" body=""
	I1212 20:31:50.957563  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:50.957832  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:51.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:31:51.457746  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:51.458084  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:51.957648  398903 type.go:168] "Request Body" body=""
	I1212 20:31:51.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:51.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:52.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:31:52.457781  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:52.458111  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:52.458163  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:52.957662  398903 type.go:168] "Request Body" body=""
	I1212 20:31:52.957750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:52.958096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:53.457800  398903 type.go:168] "Request Body" body=""
	I1212 20:31:53.457898  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:53.458256  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:53.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:31:53.957647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:53.957914  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:54.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:31:54.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:54.458054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:54.957782  398903 type.go:168] "Request Body" body=""
	I1212 20:31:54.957867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:54.958171  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:54.958225  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:55.457602  398903 type.go:168] "Request Body" body=""
	I1212 20:31:55.457673  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:55.457942  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:55.957857  398903 type.go:168] "Request Body" body=""
	I1212 20:31:55.957935  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:55.958273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:56.458155  398903 type.go:168] "Request Body" body=""
	I1212 20:31:56.458233  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:56.458540  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:56.958285  398903 type.go:168] "Request Body" body=""
	I1212 20:31:56.958359  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:56.958625  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:56.958670  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:57.458411  398903 type.go:168] "Request Body" body=""
	I1212 20:31:57.458485  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:57.458823  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:57.958474  398903 type.go:168] "Request Body" body=""
	I1212 20:31:57.958559  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:57.958919  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:58.457568  398903 type.go:168] "Request Body" body=""
	I1212 20:31:58.457647  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:58.457965  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:58.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:31:58.957725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:58.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:31:59.457623  398903 type.go:168] "Request Body" body=""
	I1212 20:31:59.457697  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:59.458016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:31:59.458072  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:31:59.957590  398903 type.go:168] "Request Body" body=""
	I1212 20:31:59.957669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:31:59.957976  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:00.457722  398903 type.go:168] "Request Body" body=""
	I1212 20:32:00.457811  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:00.458158  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:00.958017  398903 type.go:168] "Request Body" body=""
	I1212 20:32:00.958101  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:00.958428  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:01.458294  398903 type.go:168] "Request Body" body=""
	I1212 20:32:01.458366  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:01.458700  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:01.458759  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:01.958578  398903 type.go:168] "Request Body" body=""
	I1212 20:32:01.958660  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:01.959010  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:02.457649  398903 type.go:168] "Request Body" body=""
	I1212 20:32:02.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:02.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:02.957664  398903 type.go:168] "Request Body" body=""
	I1212 20:32:02.957736  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:02.958135  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:03.457649  398903 type.go:168] "Request Body" body=""
	I1212 20:32:03.457731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:03.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:03.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:32:03.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:03.958067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:03.958124  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:04.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:32:04.457689  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:04.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:04.957738  398903 type.go:168] "Request Body" body=""
	I1212 20:32:04.957816  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:04.958159  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:05.457846  398903 type.go:168] "Request Body" body=""
	I1212 20:32:05.457928  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:05.458292  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:05.958124  398903 type.go:168] "Request Body" body=""
	I1212 20:32:05.958202  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:05.958466  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:05.958511  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:06.458381  398903 type.go:168] "Request Body" body=""
	I1212 20:32:06.458469  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:06.458820  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:06.957560  398903 type.go:168] "Request Body" body=""
	I1212 20:32:06.957684  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:06.958040  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:07.457550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:07.457620  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:07.457897  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:07.957602  398903 type.go:168] "Request Body" body=""
	I1212 20:32:07.957684  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:07.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:08.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:32:08.457680  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:08.458006  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:08.458064  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:08.958540  398903 type.go:168] "Request Body" body=""
	I1212 20:32:08.958617  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:08.958908  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:09.457585  398903 type.go:168] "Request Body" body=""
	I1212 20:32:09.457660  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:09.458015  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:09.957606  398903 type.go:168] "Request Body" body=""
	I1212 20:32:09.957683  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:09.958016  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:10.457589  398903 type.go:168] "Request Body" body=""
	I1212 20:32:10.457668  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:10.457990  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:10.957644  398903 type.go:168] "Request Body" body=""
	I1212 20:32:10.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:10.958058  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:10.958119  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:11.458077  398903 type.go:168] "Request Body" body=""
	I1212 20:32:11.458157  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:11.458482  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:11.958236  398903 type.go:168] "Request Body" body=""
	I1212 20:32:11.958308  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:11.958586  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:12.458420  398903 type.go:168] "Request Body" body=""
	I1212 20:32:12.458497  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:12.458856  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:12.957555  398903 type.go:168] "Request Body" body=""
	I1212 20:32:12.957638  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:12.957981  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:13.460759  398903 type.go:168] "Request Body" body=""
	I1212 20:32:13.460830  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:13.461068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:13.461109  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:13.957766  398903 type.go:168] "Request Body" body=""
	I1212 20:32:13.957849  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:13.958216  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:14.457793  398903 type.go:168] "Request Body" body=""
	I1212 20:32:14.457868  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:14.458208  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:14.957890  398903 type.go:168] "Request Body" body=""
	I1212 20:32:14.957960  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:14.958230  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:15.457650  398903 type.go:168] "Request Body" body=""
	I1212 20:32:15.457735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:15.458122  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:15.957907  398903 type.go:168] "Request Body" body=""
	I1212 20:32:15.957985  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:15.958378  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:15.958434  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:16.458157  398903 type.go:168] "Request Body" body=""
	I1212 20:32:16.458233  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:16.458504  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:16.958300  398903 type.go:168] "Request Body" body=""
	I1212 20:32:16.958386  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:16.958758  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:17.458562  398903 type.go:168] "Request Body" body=""
	I1212 20:32:17.458639  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:17.458986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:17.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:32:17.957715  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:17.958109  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:18.457646  398903 type.go:168] "Request Body" body=""
	I1212 20:32:18.457720  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:18.458061  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:18.458116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:18.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:32:18.957731  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:18.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:19.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:32:19.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:19.457938  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:19.957698  398903 type.go:168] "Request Body" body=""
	I1212 20:32:19.957777  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:19.958136  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:20.457625  398903 type.go:168] "Request Body" body=""
	I1212 20:32:20.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:20.458047  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:20.957741  398903 type.go:168] "Request Body" body=""
	I1212 20:32:20.957811  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:20.958082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:20.958125  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:21.458048  398903 type.go:168] "Request Body" body=""
	I1212 20:32:21.458126  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:21.458473  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:21.958279  398903 type.go:168] "Request Body" body=""
	I1212 20:32:21.958354  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:21.958679  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:22.458411  398903 type.go:168] "Request Body" body=""
	I1212 20:32:22.458484  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:22.458765  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:22.958550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:22.958632  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:22.958958  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:22.959017  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:23.457629  398903 type.go:168] "Request Body" body=""
	I1212 20:32:23.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:23.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:23.957725  398903 type.go:168] "Request Body" body=""
	I1212 20:32:23.957800  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:23.958134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:24.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:32:24.457721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:24.458066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:24.957639  398903 type.go:168] "Request Body" body=""
	I1212 20:32:24.957716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:24.958081  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:25.457630  398903 type.go:168] "Request Body" body=""
	I1212 20:32:25.457704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:25.458034  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:25.458090  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:25.958111  398903 type.go:168] "Request Body" body=""
	I1212 20:32:25.958187  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:25.958536  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:26.458306  398903 type.go:168] "Request Body" body=""
	I1212 20:32:26.458383  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:26.458747  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:26.958505  398903 type.go:168] "Request Body" body=""
	I1212 20:32:26.958576  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:26.958841  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:27.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:32:27.457680  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:27.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:27.458127  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:27.957787  398903 type.go:168] "Request Body" body=""
	I1212 20:32:27.957874  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:27.958233  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:28.457931  398903 type.go:168] "Request Body" body=""
	I1212 20:32:28.457998  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:28.458263  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:28.957554  398903 type.go:168] "Request Body" body=""
	I1212 20:32:28.957643  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:28.957977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:29.457632  398903 type.go:168] "Request Body" body=""
	I1212 20:32:29.457711  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:29.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:29.957530  398903 type.go:168] "Request Body" body=""
	I1212 20:32:29.957610  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:29.957906  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:29.957953  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:30.457609  398903 type.go:168] "Request Body" body=""
	I1212 20:32:30.457697  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:30.458040  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:30.957778  398903 type.go:168] "Request Body" body=""
	I1212 20:32:30.957864  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:30.958214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:31.458073  398903 type.go:168] "Request Body" body=""
	I1212 20:32:31.458140  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:31.458418  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:31.958203  398903 type.go:168] "Request Body" body=""
	I1212 20:32:31.958278  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:31.958617  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:31.958671  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:32.458448  398903 type.go:168] "Request Body" body=""
	I1212 20:32:32.458537  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:32.458868  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:32.957533  398903 type.go:168] "Request Body" body=""
	I1212 20:32:32.957609  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:32.957933  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:33.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:32:33.457708  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:33.458036  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:33.957656  398903 type.go:168] "Request Body" body=""
	I1212 20:32:33.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:33.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:34.457588  398903 type.go:168] "Request Body" body=""
	I1212 20:32:34.457663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:34.457997  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:34.458054  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:34.957694  398903 type.go:168] "Request Body" body=""
	I1212 20:32:34.957770  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:34.958112  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:35.457630  398903 type.go:168] "Request Body" body=""
	I1212 20:32:35.457708  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:35.458060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:35.957756  398903 type.go:168] "Request Body" body=""
	I1212 20:32:35.957825  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:35.958163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:36.458166  398903 type.go:168] "Request Body" body=""
	I1212 20:32:36.458243  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:36.458598  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:36.458654  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:36.958444  398903 type.go:168] "Request Body" body=""
	I1212 20:32:36.958533  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:36.958889  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:37.458453  398903 type.go:168] "Request Body" body=""
	I1212 20:32:37.458552  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:37.458884  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:37.957603  398903 type.go:168] "Request Body" body=""
	I1212 20:32:37.957686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:37.958038  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:38.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:32:38.457739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:38.458072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:38.957536  398903 type.go:168] "Request Body" body=""
	I1212 20:32:38.957609  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:38.957905  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:38.957951  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:39.457634  398903 type.go:168] "Request Body" body=""
	I1212 20:32:39.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:39.458054  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:39.957793  398903 type.go:168] "Request Body" body=""
	I1212 20:32:39.957878  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:39.958188  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:40.458558  398903 type.go:168] "Request Body" body=""
	I1212 20:32:40.458626  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:40.458896  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:40.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:32:40.957722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:40.958066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:40.958120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:41.457917  398903 type.go:168] "Request Body" body=""
	I1212 20:32:41.458003  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:41.458345  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:41.958008  398903 type.go:168] "Request Body" body=""
	I1212 20:32:41.958090  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:41.958391  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:42.458186  398903 type.go:168] "Request Body" body=""
	I1212 20:32:42.458268  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:42.458645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:42.958471  398903 type.go:168] "Request Body" body=""
	I1212 20:32:42.958551  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:42.958913  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:42.958969  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:43.457567  398903 type.go:168] "Request Body" body=""
	I1212 20:32:43.457639  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:43.457970  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:43.957654  398903 type.go:168] "Request Body" body=""
	I1212 20:32:43.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:43.958127  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:44.457848  398903 type.go:168] "Request Body" body=""
	I1212 20:32:44.457925  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:44.458300  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:44.957921  398903 type.go:168] "Request Body" body=""
	I1212 20:32:44.957989  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:44.958269  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:45.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:32:45.457750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:45.458108  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:45.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:45.957919  398903 type.go:168] "Request Body" body=""
	I1212 20:32:45.958010  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:45.958428  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:46.458249  398903 type.go:168] "Request Body" body=""
	I1212 20:32:46.458344  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:46.458620  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:46.958392  398903 type.go:168] "Request Body" body=""
	I1212 20:32:46.958479  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:46.958844  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:47.457550  398903 type.go:168] "Request Body" body=""
	I1212 20:32:47.457637  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:47.457976  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:47.957652  398903 type.go:168] "Request Body" body=""
	I1212 20:32:47.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:47.957996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:47.958035  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:48.457660  398903 type.go:168] "Request Body" body=""
	I1212 20:32:48.457733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:48.458085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:48.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:32:48.957717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:48.958068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:49.457759  398903 type.go:168] "Request Body" body=""
	I1212 20:32:49.457832  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:49.458095  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:49.957642  398903 type.go:168] "Request Body" body=""
	I1212 20:32:49.957718  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:49.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:49.958116  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:50.457791  398903 type.go:168] "Request Body" body=""
	I1212 20:32:50.457875  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:50.458204  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:50.957582  398903 type.go:168] "Request Body" body=""
	I1212 20:32:50.957654  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:50.957961  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:51.457942  398903 type.go:168] "Request Body" body=""
	I1212 20:32:51.458024  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:51.458587  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:51.958377  398903 type.go:168] "Request Body" body=""
	I1212 20:32:51.958463  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:51.958946  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:51.959008  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:52.457596  398903 type.go:168] "Request Body" body=""
	I1212 20:32:52.457667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:52.457937  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:52.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:32:52.957732  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:52.958048  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:53.457745  398903 type.go:168] "Request Body" body=""
	I1212 20:32:53.457818  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:53.458155  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:53.958157  398903 type.go:168] "Request Body" body=""
	I1212 20:32:53.958227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:53.958497  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:54.458351  398903 type.go:168] "Request Body" body=""
	I1212 20:32:54.458435  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:54.458785  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:54.458844  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:54.957837  398903 type.go:168] "Request Body" body=""
	I1212 20:32:54.957927  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:54.958377  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:55.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:32:55.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:55.458049  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:55.958082  398903 type.go:168] "Request Body" body=""
	I1212 20:32:55.958157  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:55.958506  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:56.458323  398903 type.go:168] "Request Body" body=""
	I1212 20:32:56.458423  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:56.458789  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:56.958570  398903 type.go:168] "Request Body" body=""
	I1212 20:32:56.958641  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:56.958907  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:56.958949  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:57.457601  398903 type.go:168] "Request Body" body=""
	I1212 20:32:57.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:57.458009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:57.957647  398903 type.go:168] "Request Body" body=""
	I1212 20:32:57.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:57.958085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:58.457771  398903 type.go:168] "Request Body" body=""
	I1212 20:32:58.457845  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:58.458182  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:58.957910  398903 type.go:168] "Request Body" body=""
	I1212 20:32:58.957990  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:58.958333  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:32:59.458167  398903 type.go:168] "Request Body" body=""
	I1212 20:32:59.458246  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:59.458600  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:32:59.458673  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:32:59.958419  398903 type.go:168] "Request Body" body=""
	I1212 20:32:59.958492  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:32:59.958763  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:00.458626  398903 type.go:168] "Request Body" body=""
	I1212 20:33:00.458718  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:00.459178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:00.957917  398903 type.go:168] "Request Body" body=""
	I1212 20:33:00.957999  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:00.958339  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:01.458146  398903 type.go:168] "Request Body" body=""
	I1212 20:33:01.458227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:01.458496  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:01.958247  398903 type.go:168] "Request Body" body=""
	I1212 20:33:01.958324  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:01.958679  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:01.958746  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:02.458517  398903 type.go:168] "Request Body" body=""
	I1212 20:33:02.458595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:02.458922  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:02.957588  398903 type.go:168] "Request Body" body=""
	I1212 20:33:02.957664  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:02.957961  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:03.457658  398903 type.go:168] "Request Body" body=""
	I1212 20:33:03.457735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:03.458091  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:03.957689  398903 type.go:168] "Request Body" body=""
	I1212 20:33:03.957766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:03.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:04.457590  398903 type.go:168] "Request Body" body=""
	I1212 20:33:04.457666  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:04.458004  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:04.458057  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:04.957694  398903 type.go:168] "Request Body" body=""
	I1212 20:33:04.957771  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:04.958097  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:05.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:33:05.457724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:05.458077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:05.957795  398903 type.go:168] "Request Body" body=""
	I1212 20:33:05.957876  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:05.958156  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:06.458126  398903 type.go:168] "Request Body" body=""
	I1212 20:33:06.458201  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:06.458609  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:06.458666  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:06.958431  398903 type.go:168] "Request Body" body=""
	I1212 20:33:06.958510  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:06.958861  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:07.458432  398903 type.go:168] "Request Body" body=""
	I1212 20:33:07.458505  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:07.458769  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:07.958549  398903 type.go:168] "Request Body" body=""
	I1212 20:33:07.958631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:07.958975  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:08.457668  398903 type.go:168] "Request Body" body=""
	I1212 20:33:08.457744  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:08.458100  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:08.957714  398903 type.go:168] "Request Body" body=""
	I1212 20:33:08.957786  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:08.958051  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:08.958096  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:09.457741  398903 type.go:168] "Request Body" body=""
	I1212 20:33:09.457817  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:09.458145  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:09.957623  398903 type.go:168] "Request Body" body=""
	I1212 20:33:09.957707  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:09.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:10.457657  398903 type.go:168] "Request Body" body=""
	I1212 20:33:10.457729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:10.458029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:10.957650  398903 type.go:168] "Request Body" body=""
	I1212 20:33:10.957729  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:10.958065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:10.958120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:11.457959  398903 type.go:168] "Request Body" body=""
	I1212 20:33:11.458036  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:11.458394  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:11.958170  398903 type.go:168] "Request Body" body=""
	I1212 20:33:11.958258  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:11.958549  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:12.458358  398903 type.go:168] "Request Body" body=""
	I1212 20:33:12.458435  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:12.458775  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:12.957520  398903 type.go:168] "Request Body" body=""
	I1212 20:33:12.957604  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:12.957972  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:13.458501  398903 type.go:168] "Request Body" body=""
	I1212 20:33:13.458572  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:13.458848  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:13.458891  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:13.957574  398903 type.go:168] "Request Body" body=""
	I1212 20:33:13.957653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:13.957991  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:14.457577  398903 type.go:168] "Request Body" body=""
	I1212 20:33:14.457656  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:14.457996  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:14.957521  398903 type.go:168] "Request Body" body=""
	I1212 20:33:14.957595  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:14.957928  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:15.457515  398903 type.go:168] "Request Body" body=""
	I1212 20:33:15.457593  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:15.457969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:15.957742  398903 type.go:168] "Request Body" body=""
	I1212 20:33:15.957819  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:15.958159  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:15.958212  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:16.457912  398903 type.go:168] "Request Body" body=""
	I1212 20:33:16.457988  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:16.458249  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:16.957938  398903 type.go:168] "Request Body" body=""
	I1212 20:33:16.958013  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:16.958371  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:17.457903  398903 type.go:168] "Request Body" body=""
	I1212 20:33:17.457988  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:17.458356  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:17.957551  398903 type.go:168] "Request Body" body=""
	I1212 20:33:17.957628  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:17.957895  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:18.457585  398903 type.go:168] "Request Body" body=""
	I1212 20:33:18.457663  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:18.458004  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:18.458060  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:18.957651  398903 type.go:168] "Request Body" body=""
	I1212 20:33:18.957727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:18.958085  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:19.457757  398903 type.go:168] "Request Body" body=""
	I1212 20:33:19.457827  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:19.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:19.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:33:19.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:19.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:20.457628  398903 type.go:168] "Request Body" body=""
	I1212 20:33:20.457713  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:20.458050  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:20.458103  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:20.957580  398903 type.go:168] "Request Body" body=""
	I1212 20:33:20.957651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:20.957981  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:21.457718  398903 type.go:168] "Request Body" body=""
	I1212 20:33:21.457793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:21.458138  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:21.957850  398903 type.go:168] "Request Body" body=""
	I1212 20:33:21.957933  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:21.958282  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:22.457957  398903 type.go:168] "Request Body" body=""
	I1212 20:33:22.458031  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:22.458362  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:22.458419  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:22.958162  398903 type.go:168] "Request Body" body=""
	I1212 20:33:22.958237  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:22.958574  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:23.458385  398903 type.go:168] "Request Body" body=""
	I1212 20:33:23.458462  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:23.458816  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:23.958452  398903 type.go:168] "Request Body" body=""
	I1212 20:33:23.958525  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:23.958802  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:24.458538  398903 type.go:168] "Request Body" body=""
	I1212 20:33:24.458623  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:24.458972  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:24.459028  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:24.957567  398903 type.go:168] "Request Body" body=""
	I1212 20:33:24.957643  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:24.957987  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:25.457655  398903 type.go:168] "Request Body" body=""
	I1212 20:33:25.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:25.458002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:25.957886  398903 type.go:168] "Request Body" body=""
	I1212 20:33:25.957967  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:25.958322  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:26.458268  398903 type.go:168] "Request Body" body=""
	I1212 20:33:26.458344  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:26.458704  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:26.958389  398903 type.go:168] "Request Body" body=""
	I1212 20:33:26.958460  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:26.958721  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:26.958761  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:27.458544  398903 type.go:168] "Request Body" body=""
	I1212 20:33:27.458621  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:27.458969  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:27.957605  398903 type.go:168] "Request Body" body=""
	I1212 20:33:27.957682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:27.958006  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:28.457568  398903 type.go:168] "Request Body" body=""
	I1212 20:33:28.457642  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:28.457915  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:28.957628  398903 type.go:168] "Request Body" body=""
	I1212 20:33:28.957711  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:28.958067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:29.457799  398903 type.go:168] "Request Body" body=""
	I1212 20:33:29.457877  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:29.458218  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:29.458292  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:29.957566  398903 type.go:168] "Request Body" body=""
	I1212 20:33:29.957640  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:29.957986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:30.457705  398903 type.go:168] "Request Body" body=""
	I1212 20:33:30.457788  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:30.458134  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:30.957840  398903 type.go:168] "Request Body" body=""
	I1212 20:33:30.957922  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:30.958258  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:31.458070  398903 type.go:168] "Request Body" body=""
	I1212 20:33:31.458149  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:31.458407  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:31.458480  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:31.958244  398903 type.go:168] "Request Body" body=""
	I1212 20:33:31.958322  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:31.958670  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:32.458475  398903 type.go:168] "Request Body" body=""
	I1212 20:33:32.458555  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:32.458902  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:32.958470  398903 type.go:168] "Request Body" body=""
	I1212 20:33:32.958550  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:32.958844  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:33.457551  398903 type.go:168] "Request Body" body=""
	I1212 20:33:33.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:33.457948  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:33.957664  398903 type.go:168] "Request Body" body=""
	I1212 20:33:33.957738  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:33.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:33.958117  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:34.457524  398903 type.go:168] "Request Body" body=""
	I1212 20:33:34.457599  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:34.457902  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:34.957627  398903 type.go:168] "Request Body" body=""
	I1212 20:33:34.957704  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:34.958079  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:35.457784  398903 type.go:168] "Request Body" body=""
	I1212 20:33:35.457914  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:35.458250  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:35.958142  398903 type.go:168] "Request Body" body=""
	I1212 20:33:35.958225  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:35.958508  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:35.958562  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:36.458394  398903 type.go:168] "Request Body" body=""
	I1212 20:33:36.458478  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:36.458822  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:36.957589  398903 type.go:168] "Request Body" body=""
	I1212 20:33:36.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:36.958009  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:37.457586  398903 type.go:168] "Request Body" body=""
	I1212 20:33:37.457669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:37.458096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:37.957660  398903 type.go:168] "Request Body" body=""
	I1212 20:33:37.957739  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:37.958113  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:38.457820  398903 type.go:168] "Request Body" body=""
	I1212 20:33:38.457902  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:38.458236  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:38.458295  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:38.957610  398903 type.go:168] "Request Body" body=""
	I1212 20:33:38.957699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:38.958001  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:39.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:33:39.457722  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:39.458021  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:39.957655  398903 type.go:168] "Request Body" body=""
	I1212 20:33:39.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:39.958083  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:40.457768  398903 type.go:168] "Request Body" body=""
	I1212 20:33:40.457840  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:40.458168  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:40.957672  398903 type.go:168] "Request Body" body=""
	I1212 20:33:40.957758  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:40.958165  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:40.958231  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:41.458222  398903 type.go:168] "Request Body" body=""
	I1212 20:33:41.458298  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:41.458630  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:41.958341  398903 type.go:168] "Request Body" body=""
	I1212 20:33:41.958427  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:41.958700  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:42.458517  398903 type.go:168] "Request Body" body=""
	I1212 20:33:42.458591  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:42.458943  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:42.957649  398903 type.go:168] "Request Body" body=""
	I1212 20:33:42.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:42.958066  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:43.457746  398903 type.go:168] "Request Body" body=""
	I1212 20:33:43.457813  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:43.458089  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:43.458129  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:43.957791  398903 type.go:168] "Request Body" body=""
	I1212 20:33:43.957883  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:43.958248  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:44.457980  398903 type.go:168] "Request Body" body=""
	I1212 20:33:44.458055  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:44.458393  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:44.958151  398903 type.go:168] "Request Body" body=""
	I1212 20:33:44.958223  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:44.958490  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:45.458269  398903 type.go:168] "Request Body" body=""
	I1212 20:33:45.458343  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:45.458708  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:45.458764  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:45.958513  398903 type.go:168] "Request Body" body=""
	I1212 20:33:45.958590  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:45.958931  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:46.457565  398903 type.go:168] "Request Body" body=""
	I1212 20:33:46.457633  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:46.457910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:46.957631  398903 type.go:168] "Request Body" body=""
	I1212 20:33:46.957733  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:46.958128  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:47.457846  398903 type.go:168] "Request Body" body=""
	I1212 20:33:47.457922  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:47.458245  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:47.957545  398903 type.go:168] "Request Body" body=""
	I1212 20:33:47.957618  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:47.957914  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:47.957963  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:48.457643  398903 type.go:168] "Request Body" body=""
	I1212 20:33:48.457727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:48.458067  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:48.957629  398903 type.go:168] "Request Body" body=""
	I1212 20:33:48.957712  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:48.958060  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:49.457729  398903 type.go:168] "Request Body" body=""
	I1212 20:33:49.457799  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:49.458103  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:49.957633  398903 type.go:168] "Request Body" body=""
	I1212 20:33:49.957725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:49.958056  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:49.958114  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:50.457640  398903 type.go:168] "Request Body" body=""
	I1212 20:33:50.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:50.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:50.957791  398903 type.go:168] "Request Body" body=""
	I1212 20:33:50.957864  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:50.958188  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:51.458156  398903 type.go:168] "Request Body" body=""
	I1212 20:33:51.458244  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:51.458588  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:51.958381  398903 type.go:168] "Request Body" body=""
	I1212 20:33:51.958464  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:51.958840  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:51.958897  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:52.458422  398903 type.go:168] "Request Body" body=""
	I1212 20:33:52.458495  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:52.458781  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:52.958521  398903 type.go:168] "Request Body" body=""
	I1212 20:33:52.958596  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:52.958935  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:53.457563  398903 type.go:168] "Request Body" body=""
	I1212 20:33:53.457641  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:53.457994  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:53.957675  398903 type.go:168] "Request Body" body=""
	I1212 20:33:53.957749  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:53.958046  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:54.457737  398903 type.go:168] "Request Body" body=""
	I1212 20:33:54.457815  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:54.458164  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:54.458229  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:54.957758  398903 type.go:168] "Request Body" body=""
	I1212 20:33:54.957838  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:54.958212  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:55.457597  398903 type.go:168] "Request Body" body=""
	I1212 20:33:55.457673  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:55.458019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:55.958073  398903 type.go:168] "Request Body" body=""
	I1212 20:33:55.958151  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:55.958481  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:56.458356  398903 type.go:168] "Request Body" body=""
	I1212 20:33:56.458518  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:56.458867  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:56.458919  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:56.958475  398903 type.go:168] "Request Body" body=""
	I1212 20:33:56.958546  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:56.958806  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:57.457573  398903 type.go:168] "Request Body" body=""
	I1212 20:33:57.457662  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:57.458019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:57.957708  398903 type.go:168] "Request Body" body=""
	I1212 20:33:57.957793  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:57.958149  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:58.457519  398903 type.go:168] "Request Body" body=""
	I1212 20:33:58.457596  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:58.457910  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:58.957618  398903 type.go:168] "Request Body" body=""
	I1212 20:33:58.957702  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:58.958029  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:33:58.958086  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:33:59.457639  398903 type.go:168] "Request Body" body=""
	I1212 20:33:59.457717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:59.458079  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:33:59.957625  398903 type.go:168] "Request Body" body=""
	I1212 20:33:59.957695  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:33:59.958025  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:00.457684  398903 type.go:168] "Request Body" body=""
	I1212 20:34:00.457770  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:00.458220  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:00.957723  398903 type.go:168] "Request Body" body=""
	I1212 20:34:00.957815  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:00.958152  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:00.958209  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:01.458053  398903 type.go:168] "Request Body" body=""
	I1212 20:34:01.458124  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:01.458397  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:01.958241  398903 type.go:168] "Request Body" body=""
	I1212 20:34:01.958318  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:01.958645  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:02.458431  398903 type.go:168] "Request Body" body=""
	I1212 20:34:02.458517  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:02.458903  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:02.958515  398903 type.go:168] "Request Body" body=""
	I1212 20:34:02.958593  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:02.958871  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:02.958913  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:03.457571  398903 type.go:168] "Request Body" body=""
	I1212 20:34:03.457665  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:03.458014  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:03.957750  398903 type.go:168] "Request Body" body=""
	I1212 20:34:03.957834  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:03.958178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:04.457755  398903 type.go:168] "Request Body" body=""
	I1212 20:34:04.457832  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:04.458106  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:04.957792  398903 type.go:168] "Request Body" body=""
	I1212 20:34:04.957872  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:04.958222  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:05.457932  398903 type.go:168] "Request Body" body=""
	I1212 20:34:05.458011  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:05.458316  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:05.458363  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:05.958224  398903 type.go:168] "Request Body" body=""
	I1212 20:34:05.958347  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:05.958674  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:06.457554  398903 type.go:168] "Request Body" body=""
	I1212 20:34:06.457631  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:06.457980  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:06.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:06.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:06.958087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:07.457764  398903 type.go:168] "Request Body" body=""
	I1212 20:34:07.457837  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:07.458126  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:07.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:34:07.957717  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:07.958073  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:07.958131  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:08.457790  398903 type.go:168] "Request Body" body=""
	I1212 20:34:08.457867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:08.458190  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:08.957583  398903 type.go:168] "Request Body" body=""
	I1212 20:34:08.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:08.958018  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:09.457609  398903 type.go:168] "Request Body" body=""
	I1212 20:34:09.457690  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:09.457986  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:09.957661  398903 type.go:168] "Request Body" body=""
	I1212 20:34:09.957735  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:09.958082  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:10.457606  398903 type.go:168] "Request Body" body=""
	I1212 20:34:10.457682  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:10.458044  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:10.458120  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:10.957641  398903 type.go:168] "Request Body" body=""
	I1212 20:34:10.957716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:10.958069  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:11.457925  398903 type.go:168] "Request Body" body=""
	I1212 20:34:11.458005  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:11.458337  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:11.957904  398903 type.go:168] "Request Body" body=""
	I1212 20:34:11.957987  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:11.958273  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:12.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:34:12.457716  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:12.458055  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:12.957766  398903 type.go:168] "Request Body" body=""
	I1212 20:34:12.957844  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:12.958153  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:12.958206  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:13.457572  398903 type.go:168] "Request Body" body=""
	I1212 20:34:13.457652  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:13.457977  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:13.957665  398903 type.go:168] "Request Body" body=""
	I1212 20:34:13.957752  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:13.958163  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:14.457645  398903 type.go:168] "Request Body" body=""
	I1212 20:34:14.457721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:14.458033  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:14.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:34:14.957669  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:14.957980  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:15.457709  398903 type.go:168] "Request Body" body=""
	I1212 20:34:15.457800  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:15.458149  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:15.458206  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:15.957907  398903 type.go:168] "Request Body" body=""
	I1212 20:34:15.958010  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:15.958356  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:16.458302  398903 type.go:168] "Request Body" body=""
	I1212 20:34:16.458374  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:16.458653  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:16.958451  398903 type.go:168] "Request Body" body=""
	I1212 20:34:16.958529  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:16.958870  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:17.457647  398903 type.go:168] "Request Body" body=""
	I1212 20:34:17.457741  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:17.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:17.957571  398903 type.go:168] "Request Body" body=""
	I1212 20:34:17.957648  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:17.958005  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:17.958058  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:18.457731  398903 type.go:168] "Request Body" body=""
	I1212 20:34:18.457820  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:18.458202  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:18.957933  398903 type.go:168] "Request Body" body=""
	I1212 20:34:18.958011  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:18.958346  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:19.457582  398903 type.go:168] "Request Body" body=""
	I1212 20:34:19.457658  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:19.457973  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:19.957638  398903 type.go:168] "Request Body" body=""
	I1212 20:34:19.957723  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:19.958037  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:19.958084  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:20.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:34:20.457726  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:20.458052  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:20.957756  398903 type.go:168] "Request Body" body=""
	I1212 20:34:20.957830  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:20.958096  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:21.458059  398903 type.go:168] "Request Body" body=""
	I1212 20:34:21.458132  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:21.458454  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:21.958169  398903 type.go:168] "Request Body" body=""
	I1212 20:34:21.958248  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:21.958614  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:21.958670  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:22.458387  398903 type.go:168] "Request Body" body=""
	I1212 20:34:22.458456  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:22.458712  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:22.958495  398903 type.go:168] "Request Body" body=""
	I1212 20:34:22.958574  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:22.958894  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:23.457621  398903 type.go:168] "Request Body" body=""
	I1212 20:34:23.457699  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:23.458042  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:23.957581  398903 type.go:168] "Request Body" body=""
	I1212 20:34:23.957653  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:23.957931  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:24.457637  398903 type.go:168] "Request Body" body=""
	I1212 20:34:24.457766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:24.458068  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:24.458117  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:24.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:24.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:24.958072  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:25.457596  398903 type.go:168] "Request Body" body=""
	I1212 20:34:25.457679  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:25.458023  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:25.958032  398903 type.go:168] "Request Body" body=""
	I1212 20:34:25.958118  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:25.958454  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:26.458388  398903 type.go:168] "Request Body" body=""
	I1212 20:34:26.458463  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:26.458824  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:26.458879  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:26.958476  398903 type.go:168] "Request Body" body=""
	I1212 20:34:26.958547  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:26.958814  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:27.458579  398903 type.go:168] "Request Body" body=""
	I1212 20:34:27.458656  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:27.458987  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:27.957727  398903 type.go:168] "Request Body" body=""
	I1212 20:34:27.957802  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:27.958162  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:28.458439  398903 type.go:168] "Request Body" body=""
	I1212 20:34:28.458510  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:28.458774  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:28.958512  398903 type.go:168] "Request Body" body=""
	I1212 20:34:28.958589  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:28.958911  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:28.958974  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:29.457611  398903 type.go:168] "Request Body" body=""
	I1212 20:34:29.457686  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:29.458020  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:29.957734  398903 type.go:168] "Request Body" body=""
	I1212 20:34:29.957825  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:29.958161  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:30.457641  398903 type.go:168] "Request Body" body=""
	I1212 20:34:30.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:30.458083  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:30.957610  398903 type.go:168] "Request Body" body=""
	I1212 20:34:30.957692  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:30.958024  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:31.457903  398903 type.go:168] "Request Body" body=""
	I1212 20:34:31.458012  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:31.458336  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:31.458388  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:31.958144  398903 type.go:168] "Request Body" body=""
	I1212 20:34:31.958227  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:31.958581  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:32.458466  398903 type.go:168] "Request Body" body=""
	I1212 20:34:32.458569  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:32.458930  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:32.957573  398903 type.go:168] "Request Body" body=""
	I1212 20:34:32.957651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:32.957985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:33.457644  398903 type.go:168] "Request Body" body=""
	I1212 20:34:33.457725  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:33.458094  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:33.957814  398903 type.go:168] "Request Body" body=""
	I1212 20:34:33.957889  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:33.958221  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:33.958279  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:34.457576  398903 type.go:168] "Request Body" body=""
	I1212 20:34:34.457651  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:34.457968  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:34.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:34:34.957724  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:34.958077  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:35.457792  398903 type.go:168] "Request Body" body=""
	I1212 20:34:35.457876  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:35.458181  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:35.958034  398903 type.go:168] "Request Body" body=""
	I1212 20:34:35.958104  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:35.958369  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:35.958411  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:36.458355  398903 type.go:168] "Request Body" body=""
	I1212 20:34:36.458432  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:36.458815  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:36.957543  398903 type.go:168] "Request Body" body=""
	I1212 20:34:36.957626  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:36.957947  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:37.457604  398903 type.go:168] "Request Body" body=""
	I1212 20:34:37.457678  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:37.457995  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:37.957635  398903 type.go:168] "Request Body" body=""
	I1212 20:34:37.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:37.958039  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:38.457642  398903 type.go:168] "Request Body" body=""
	I1212 20:34:38.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:38.458116  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:38.458172  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:38.957684  398903 type.go:168] "Request Body" body=""
	I1212 20:34:38.957762  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:38.958062  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:39.457740  398903 type.go:168] "Request Body" body=""
	I1212 20:34:39.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:39.458189  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:39.957892  398903 type.go:168] "Request Body" body=""
	I1212 20:34:39.957975  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:39.958305  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:40.457581  398903 type.go:168] "Request Body" body=""
	I1212 20:34:40.457659  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:40.457974  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:40.957654  398903 type.go:168] "Request Body" body=""
	I1212 20:34:40.957727  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:40.958080  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:40.958134  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:41.457945  398903 type.go:168] "Request Body" body=""
	I1212 20:34:41.458029  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:41.458375  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:41.958149  398903 type.go:168] "Request Body" body=""
	I1212 20:34:41.958218  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:41.958489  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:42.458344  398903 type.go:168] "Request Body" body=""
	I1212 20:34:42.458423  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:42.458797  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:42.957548  398903 type.go:168] "Request Body" body=""
	I1212 20:34:42.957661  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:42.958002  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:43.457680  398903 type.go:168] "Request Body" body=""
	I1212 20:34:43.457765  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:43.458087  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:43.458139  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:43.957634  398903 type.go:168] "Request Body" body=""
	I1212 20:34:43.957719  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:43.958074  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:44.457784  398903 type.go:168] "Request Body" body=""
	I1212 20:34:44.457863  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:44.458214  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:44.957493  398903 type.go:168] "Request Body" body=""
	I1212 20:34:44.957567  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:44.957832  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:45.457549  398903 type.go:168] "Request Body" body=""
	I1212 20:34:45.457634  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:45.457985  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:45.957790  398903 type.go:168] "Request Body" body=""
	I1212 20:34:45.957867  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:45.958220  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:45.958281  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:46.458047  398903 type.go:168] "Request Body" body=""
	I1212 20:34:46.458139  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:46.458408  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:46.958199  398903 type.go:168] "Request Body" body=""
	I1212 20:34:46.958280  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:46.958672  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:47.458502  398903 type.go:168] "Request Body" body=""
	I1212 20:34:47.458578  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:47.458923  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:47.957598  398903 type.go:168] "Request Body" body=""
	I1212 20:34:47.957667  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:47.958000  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:48.457673  398903 type.go:168] "Request Body" body=""
	I1212 20:34:48.457766  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:48.458114  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:48.458163  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:48.957646  398903 type.go:168] "Request Body" body=""
	I1212 20:34:48.957721  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:48.958063  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:49.457750  398903 type.go:168] "Request Body" body=""
	I1212 20:34:49.457824  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:49.458132  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:49.957625  398903 type.go:168] "Request Body" body=""
	I1212 20:34:49.957700  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:49.958065  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:50.457775  398903 type.go:168] "Request Body" body=""
	I1212 20:34:50.457853  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:50.458187  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:50.458247  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:50.957570  398903 type.go:168] "Request Body" body=""
	I1212 20:34:50.957642  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:50.957959  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:51.457904  398903 type.go:168] "Request Body" body=""
	I1212 20:34:51.458001  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:51.458321  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:51.957626  398903 type.go:168] "Request Body" body=""
	I1212 20:34:51.957709  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:51.958019  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:52.457677  398903 type.go:168] "Request Body" body=""
	I1212 20:34:52.457750  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:52.458071  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:52.957643  398903 type.go:168] "Request Body" body=""
	I1212 20:34:52.957728  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:52.958070  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:52.958126  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:53.457793  398903 type.go:168] "Request Body" body=""
	I1212 20:34:53.457868  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:53.458211  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:53.957606  398903 type.go:168] "Request Body" body=""
	I1212 20:34:53.957688  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:53.958045  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:54.457738  398903 type.go:168] "Request Body" body=""
	I1212 20:34:54.457816  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:54.458178  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:54.957898  398903 type.go:168] "Request Body" body=""
	I1212 20:34:54.957979  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:54.958335  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	W1212 20:34:54.958392  398903 node_ready.go:55] error getting node "functional-261311" condition "Ready" status (will retry): Get "https://192.168.49.2:8441/api/v1/nodes/functional-261311": dial tcp 192.168.49.2:8441: connect: connection refused
	I1212 20:34:55.457874  398903 type.go:168] "Request Body" body=""
	I1212 20:34:55.457957  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:55.461901  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1212 20:34:55.957753  398903 type.go:168] "Request Body" body=""
	I1212 20:34:55.957835  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:55.958180  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:56.458205  398903 type.go:168] "Request Body" body=""
	I1212 20:34:56.458289  398903 round_trippers.go:527] "Request" verb="GET" url="https://192.168.49.2:8441/api/v1/nodes/functional-261311" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-linux-arm64/v0.0.0 (linux/arm64) kubernetes/$Format
	 >
	I1212 20:34:56.458646  398903 round_trippers.go:632] "Response" status="" headers="" milliseconds=0
	I1212 20:34:56.958289  398903 type.go:168] "Request Body" body=""
	I1212 20:34:56.958348  398903 node_ready.go:38] duration metric: took 6m0.000942014s for node "functional-261311" to be "Ready" ...
	I1212 20:34:56.961249  398903 out.go:203] 
	W1212 20:34:56.963984  398903 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 20:34:56.964005  398903 out.go:285] * 
	W1212 20:34:56.966156  398903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:34:56.969023  398903 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 20:35:06 functional-261311 crio[5365]: time="2025-12-12T20:35:06.367352643Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=490740ea-6770-4c3b-8f9a-c249ed174965 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.450822832Z" level=info msg="Checking image status: minikube-local-cache-test:functional-261311" id=883d3025-5932-4ce6-ab51-85fab7fe190d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.451041287Z" level=info msg="Resolving \"minikube-local-cache-test\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.451096443Z" level=info msg="Image minikube-local-cache-test:functional-261311 not found" id=883d3025-5932-4ce6-ab51-85fab7fe190d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.451186684Z" level=info msg="Neither image nor artfiact minikube-local-cache-test:functional-261311 found" id=883d3025-5932-4ce6-ab51-85fab7fe190d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.477868512Z" level=info msg="Checking image status: docker.io/library/minikube-local-cache-test:functional-261311" id=749663a1-5e4e-4673-a1c1-e95b9bdcf9b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.478016182Z" level=info msg="Image docker.io/library/minikube-local-cache-test:functional-261311 not found" id=749663a1-5e4e-4673-a1c1-e95b9bdcf9b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.478057478Z" level=info msg="Neither image nor artfiact docker.io/library/minikube-local-cache-test:functional-261311 found" id=749663a1-5e4e-4673-a1c1-e95b9bdcf9b9 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.504304661Z" level=info msg="Checking image status: localhost/library/minikube-local-cache-test:functional-261311" id=be3c042e-7533-4a9a-8ba2-a3667ea82297 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.504481836Z" level=info msg="Image localhost/library/minikube-local-cache-test:functional-261311 not found" id=be3c042e-7533-4a9a-8ba2-a3667ea82297 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:07 functional-261311 crio[5365]: time="2025-12-12T20:35:07.504526735Z" level=info msg="Neither image nor artfiact localhost/library/minikube-local-cache-test:functional-261311 found" id=be3c042e-7533-4a9a-8ba2-a3667ea82297 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:08 functional-261311 crio[5365]: time="2025-12-12T20:35:08.500804411Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=32fd65ec-abb5-48a7-af6b-a0e0059f7b47 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:08 functional-261311 crio[5365]: time="2025-12-12T20:35:08.844040841Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=adbd8d4b-922c-4ca3-93fe-af4324dfaee0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:08 functional-261311 crio[5365]: time="2025-12-12T20:35:08.844230045Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=adbd8d4b-922c-4ca3-93fe-af4324dfaee0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:08 functional-261311 crio[5365]: time="2025-12-12T20:35:08.844289533Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=adbd8d4b-922c-4ca3-93fe-af4324dfaee0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.520024776Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=58aca57d-e555-4df6-85e3-2c89034783c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.520149594Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=58aca57d-e555-4df6-85e3-2c89034783c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.520186829Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=58aca57d-e555-4df6-85e3-2c89034783c6 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.545215858Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=88deabc7-58ff-44bb-ac34-d06fcc945c15 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.545375826Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=88deabc7-58ff-44bb-ac34-d06fcc945c15 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.545416245Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=88deabc7-58ff-44bb-ac34-d06fcc945c15 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.571808184Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=c9b33570-5669-4cae-840b-38259988d85e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.571955123Z" level=info msg="Image registry.k8s.io/pause:latest not found" id=c9b33570-5669-4cae-840b-38259988d85e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:09 functional-261311 crio[5365]: time="2025-12-12T20:35:09.571992432Z" level=info msg="Neither image nor artfiact registry.k8s.io/pause:latest found" id=c9b33570-5669-4cae-840b-38259988d85e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:35:10 functional-261311 crio[5365]: time="2025-12-12T20:35:10.12697813Z" level=info msg="Checking image status: registry.k8s.io/pause:latest" id=b7030a75-60ee-4337-931a-e0927afb9fdf name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:35:14.330497    9539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:14.331121    9539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:14.332897    9539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:14.333515    9539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:35:14.335067    9539 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:35:14 up  3:17,  0 user,  load average: 1.08, 0.48, 0.96
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:35:12 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:35:12 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1157.
	Dec 12 20:35:12 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:12 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:12 functional-261311 kubelet[9414]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:12 functional-261311 kubelet[9414]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:12 functional-261311 kubelet[9414]: E1212 20:35:12.772171    9414 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:35:12 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:35:12 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:35:13 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1158.
	Dec 12 20:35:13 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:13 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:13 functional-261311 kubelet[9449]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:13 functional-261311 kubelet[9449]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:13 functional-261311 kubelet[9449]: E1212 20:35:13.540636    9449 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:35:13 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:35:13 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:35:14 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1159.
	Dec 12 20:35:14 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:14 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:35:14 functional-261311 kubelet[9523]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:14 functional-261311 kubelet[9523]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:35:14 functional-261311 kubelet[9523]: E1212 20:35:14.281480    9523 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:35:14 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:35:14 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (349.520998ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (2.50s)
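The kubelet section of the log above shows the node agent crash-looping on the cgroup v1 validation error, which is why the apiserver stays Stopped and the kubectl commands are skipped. A sketch of inspecting that failure directly on the node is below; the ssh pass-through form and the pipe to tail are assumptions not shown in this run, while the profile name and the systemctl/journalctl commands are taken from this report:

	# Sketch only: inspect the crash-looping kubelet seen in the "==> kubelet <==" section,
	# using the troubleshooting commands kubeadm recommends later in this report.
	out/minikube-linux-arm64 ssh -p functional-261311 -- sudo systemctl status kubelet
	out/minikube-linux-arm64 ssh -p functional-261311 -- sudo journalctl -xeu kubelet | tail -n 50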

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-261311 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 20:37:44.064552  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:39:36.832591  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:40:59.903067  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:42:44.061101  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:44:36.832559  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-261311 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m12.122002836s)

                                                
                                                
-- stdout --
	* [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000280513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
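The kubeadm failure above ends with minikube's own remediation hint: read the kubelet journal and retry the start with an explicit cgroup driver. A minimal sketch of what acting on that hint could look like for this profile, assuming the same binary path and profile name used throughout this run (whether systemd is the right driver depends on the host; this run detected cgroupfs):

	# Inspect the kubelet journal inside the node container
	out/minikube-linux-arm64 ssh -p functional-261311 -- sudo journalctl -xeu kubelet

	# Retry the start with the cgroup driver pinned, per the suggestion above
	out/minikube-linux-arm64 start -p functional-261311 --extra-config=kubelet.cgroup-driver=systemd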
functional_test.go:774: failed to restart minikube. args "out/minikube-linux-arm64 start -p functional-261311 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m12.123276593s for "functional-261311" cluster.
I1212 20:47:27.536078  364853 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
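The NetworkSettings.Ports block in the inspect output above is how the harness reaches the node: each guest port (22 for SSH, 2376, 5000, 32443, and 8441 for the apiserver) is published on an ephemeral 127.0.0.1 port. As a hedged sketch, the same mapping can be read back with the Go template minikube itself runs later in this log, or with docker port (the host port 33162 shown here is specific to this run):

	# Host port published for the node's SSH port (22/tcp)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-261311

	# All published ports for the node container
	docker port functional-261311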
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (319.656399ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
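The helper only asks for the .Host field, which is why the command prints just "Running" even though kubeadm never brought the control plane up. As an illustrative sketch (field names assumed from minikube's status struct; verify against the installed version), the same --format flag can surface the per-component fields to show which part is unhealthy:

	# Query host, kubelet and apiserver state in one call
	out/minikube-linux-arm64 status -p functional-261311 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'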
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr                                            │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format table --alsologtostderr                                                                                       │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ delete         │ -p functional-205528                                                                                                                              │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ start          │ -p functional-261311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ start          │ -p functional-261311 --alsologtostderr -v=8                                                                                                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:28 UTC │                     │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:latest                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add minikube-local-cache-test:functional-261311                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache delete minikube-local-cache-test:functional-261311                                                                        │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl images                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	│ cache          │ functional-261311 cache reload                                                                                                                    │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ kubectl        │ functional-261311 kubectl -- --context functional-261311 get pods                                                                                 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	│ start          │ -p functional-261311 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:35:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:35:15.460416  404800 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:35:15.460537  404800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:35:15.460541  404800 out.go:374] Setting ErrFile to fd 2...
	I1212 20:35:15.460545  404800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:35:15.461281  404800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:35:15.461704  404800 out.go:368] Setting JSON to false
	I1212 20:35:15.462524  404800 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11868,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:35:15.462588  404800 start.go:143] virtualization:  
	I1212 20:35:15.465993  404800 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:35:15.469163  404800 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:35:15.469272  404800 notify.go:221] Checking for updates...
	I1212 20:35:15.475214  404800 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:35:15.478288  404800 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:35:15.481030  404800 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:35:15.483916  404800 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:35:15.486846  404800 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:35:15.490383  404800 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:35:15.490523  404800 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:35:15.521733  404800 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:35:15.521840  404800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:35:15.586834  404800 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 20:35:15.575092276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:35:15.586929  404800 docker.go:319] overlay module found
	I1212 20:35:15.590005  404800 out.go:179] * Using the docker driver based on existing profile
	I1212 20:35:15.592944  404800 start.go:309] selected driver: docker
	I1212 20:35:15.592962  404800 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:15.593077  404800 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:35:15.593201  404800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:35:15.653530  404800 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 20:35:15.644295166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:35:15.653919  404800 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:35:15.653944  404800 cni.go:84] Creating CNI manager for ""
	I1212 20:35:15.653992  404800 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:35:15.654035  404800 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:15.657113  404800 out.go:179] * Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	I1212 20:35:15.659873  404800 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:35:15.662874  404800 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:35:15.665759  404800 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:35:15.665839  404800 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:35:15.665900  404800 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:35:15.665919  404800 cache.go:65] Caching tarball of preloaded images
	I1212 20:35:15.666041  404800 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:35:15.666050  404800 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:35:15.666202  404800 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:35:15.685367  404800 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:35:15.685378  404800 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:35:15.685400  404800 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:35:15.685432  404800 start.go:360] acquireMachinesLock for functional-261311: {Name:mkbc4e6c743e47953e99b8ce65e244d33b483105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:35:15.685502  404800 start.go:364] duration metric: took 54.475µs to acquireMachinesLock for "functional-261311"
	I1212 20:35:15.685521  404800 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:35:15.685526  404800 fix.go:54] fixHost starting: 
	I1212 20:35:15.685789  404800 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:35:15.703273  404800 fix.go:112] recreateIfNeeded on functional-261311: state=Running err=<nil>
	W1212 20:35:15.703293  404800 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:35:15.712450  404800 out.go:252] * Updating the running docker "functional-261311" container ...
	I1212 20:35:15.712481  404800 machine.go:94] provisionDockerMachine start ...
	I1212 20:35:15.712578  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:15.736656  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:15.736977  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:15.736984  404800 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:35:15.891915  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:35:15.891929  404800 ubuntu.go:182] provisioning hostname "functional-261311"
	I1212 20:35:15.891999  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:15.910460  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:15.910779  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:15.910787  404800 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-261311 && echo "functional-261311" | sudo tee /etc/hostname
	I1212 20:35:16.077690  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:35:16.077778  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.097025  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:16.097341  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:16.097354  404800 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-261311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-261311/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-261311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:35:16.252758  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:35:16.252773  404800 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:35:16.252793  404800 ubuntu.go:190] setting up certificates
	I1212 20:35:16.252801  404800 provision.go:84] configureAuth start
	I1212 20:35:16.252918  404800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:35:16.270682  404800 provision.go:143] copyHostCerts
	I1212 20:35:16.270755  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:35:16.270763  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:35:16.270834  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:35:16.270926  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:35:16.270930  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:35:16.270953  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:35:16.271010  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:35:16.271014  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:35:16.271036  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:35:16.271079  404800 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.functional-261311 san=[127.0.0.1 192.168.49.2 functional-261311 localhost minikube]
	I1212 20:35:16.466046  404800 provision.go:177] copyRemoteCerts
	I1212 20:35:16.466103  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:35:16.466141  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.490439  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:16.596331  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:35:16.614499  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:35:16.632168  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:35:16.649948  404800 provision.go:87] duration metric: took 397.124655ms to configureAuth
	I1212 20:35:16.649967  404800 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:35:16.650174  404800 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:35:16.650275  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.667262  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:16.667562  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:16.667574  404800 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:35:17.020390  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:35:17.020403  404800 machine.go:97] duration metric: took 1.307915361s to provisionDockerMachine
	I1212 20:35:17.020413  404800 start.go:293] postStartSetup for "functional-261311" (driver="docker")
	I1212 20:35:17.020431  404800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:35:17.020498  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:35:17.020542  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.039179  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.144817  404800 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:35:17.148499  404800 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:35:17.148517  404800 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:35:17.148528  404800 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:35:17.148587  404800 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:35:17.148671  404800 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:35:17.148745  404800 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> hosts in /etc/test/nested/copy/364853
	I1212 20:35:17.148790  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364853
	I1212 20:35:17.156874  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:35:17.175633  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts --> /etc/test/nested/copy/364853/hosts (40 bytes)
	I1212 20:35:17.193693  404800 start.go:296] duration metric: took 173.265259ms for postStartSetup
	I1212 20:35:17.193768  404800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:35:17.193829  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.212738  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.326054  404800 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:35:17.331128  404800 fix.go:56] duration metric: took 1.64559363s for fixHost
	I1212 20:35:17.331145  404800 start.go:83] releasing machines lock for "functional-261311", held for 1.645635346s
	I1212 20:35:17.331211  404800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:35:17.348942  404800 ssh_runner.go:195] Run: cat /version.json
	I1212 20:35:17.348993  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.349240  404800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:35:17.349288  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.377660  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.380423  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.480436  404800 ssh_runner.go:195] Run: systemctl --version
	I1212 20:35:17.572826  404800 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:35:17.610243  404800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:35:17.614893  404800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:35:17.614954  404800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:35:17.623289  404800 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:35:17.623303  404800 start.go:496] detecting cgroup driver to use...
	I1212 20:35:17.623333  404800 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:35:17.623377  404800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:35:17.638845  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:35:17.652624  404800 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:35:17.652690  404800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:35:17.668971  404800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:35:17.682562  404800 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:35:17.807109  404800 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:35:17.921667  404800 docker.go:234] disabling docker service ...
	I1212 20:35:17.921741  404800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:35:17.940321  404800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:35:17.957092  404800 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:35:18.087741  404800 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:35:18.206163  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:35:18.219734  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:35:18.233813  404800 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:35:18.233881  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.242826  404800 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:35:18.242900  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.252023  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.261290  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.270163  404800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:35:18.278452  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.287612  404800 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.296129  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.305360  404800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:35:18.313008  404800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:35:18.320507  404800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:35:18.433496  404800 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:35:18.624476  404800 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:35:18.624545  404800 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:35:18.628455  404800 start.go:564] Will wait 60s for crictl version
	I1212 20:35:18.628509  404800 ssh_runner.go:195] Run: which crictl
	I1212 20:35:18.631901  404800 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:35:18.657967  404800 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:35:18.658043  404800 ssh_runner.go:195] Run: crio --version
	I1212 20:35:18.686054  404800 ssh_runner.go:195] Run: crio --version
	I1212 20:35:18.728907  404800 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:35:18.731836  404800 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:35:18.758101  404800 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:35:18.765430  404800 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 20:35:18.768359  404800 kubeadm.go:884] updating cluster {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:35:18.768498  404800 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:35:18.768569  404800 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:35:18.809159  404800 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:35:18.809172  404800 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:35:18.809226  404800 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:35:18.835786  404800 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:35:18.835798  404800 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:35:18.835804  404800 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 20:35:18.835897  404800 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-261311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:35:18.835978  404800 ssh_runner.go:195] Run: crio config
	I1212 20:35:18.911975  404800 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 20:35:18.911996  404800 cni.go:84] Creating CNI manager for ""
	I1212 20:35:18.912005  404800 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:35:18.912021  404800 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:35:18.912048  404800 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-261311 NodeName:functional-261311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:35:18.912174  404800 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-261311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
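The block above is the complete multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that minikube renders and, a few lines below, copies to /var/tmp/minikube/kubeadm.yaml.new. As a rough aid for checking what actually landed on the node — not minikube code — here is a small Go sketch using gopkg.in/yaml.v3 that walks those documents and prints each kind:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log above; adjust when inspecting a different profile.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		// Only the identifying fields are decoded; everything else is ignored.
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			fmt.Println("parse error:", err)
			return
		}
		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
	}
}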
	I1212 20:35:18.912242  404800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:35:18.919878  404800 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:35:18.919945  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:35:18.927506  404800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:35:18.940260  404800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:35:18.953546  404800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1212 20:35:18.966154  404800 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:35:18.969878  404800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:35:19.088694  404800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:35:19.456785  404800 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311 for IP: 192.168.49.2
	I1212 20:35:19.456797  404800 certs.go:195] generating shared ca certs ...
	I1212 20:35:19.456811  404800 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:35:19.457015  404800 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:35:19.457061  404800 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:35:19.457083  404800 certs.go:257] generating profile certs ...
	I1212 20:35:19.457188  404800 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key
	I1212 20:35:19.457266  404800 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7
	I1212 20:35:19.457320  404800 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key
	I1212 20:35:19.457484  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:35:19.457522  404800 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:35:19.457530  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:35:19.457572  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:35:19.457613  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:35:19.457656  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:35:19.457720  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:35:19.458537  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:35:19.481387  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:35:19.503914  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:35:19.527911  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:35:19.547817  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:35:19.567001  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:35:19.585411  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:35:19.603199  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:35:19.621415  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:35:19.639746  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:35:19.657747  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:35:19.675414  404800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:35:19.688797  404800 ssh_runner.go:195] Run: openssl version
	I1212 20:35:19.695324  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.703181  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:35:19.710800  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.714682  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.714738  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.755943  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:35:19.764525  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.772260  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:35:19.780093  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.783725  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.783778  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.825039  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:35:19.832411  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.839917  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:35:19.847683  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.851494  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.851551  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.892840  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
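The three openssl/ln sequences above put each CA into the node's trust store: the PEM is linked under /etc/ssl/certs, its OpenSSL subject hash is computed, and a <hash>.0 symlink is expected to point at it. A rough Go sketch of creating such a hash link — the cert path is illustrative and the sudo/SSH plumbing minikube actually uses is omitted:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashLink computes the OpenSSL subject hash of certPath (the value printed by
// `openssl x509 -hash -noout -in <cert>`) and symlinks /etc/ssl/certs/<hash>.0
// at it, which is how the system trust store locates the CA.
func hashLink(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}

func main() {
	if err := hashLink("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("hash link failed:", err)
	}
}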
	I1212 20:35:19.900611  404800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:35:19.904415  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:35:19.945816  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:35:19.987206  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:35:20.028949  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:35:20.071640  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:35:20.114011  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:35:20.155956  404800 kubeadm.go:401] StartCluster: {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:20.156040  404800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:35:20.156106  404800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:35:20.185271  404800 cri.go:89] found id: ""
	I1212 20:35:20.185335  404800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:35:20.193716  404800 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:35:20.193726  404800 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:35:20.193778  404800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:35:20.201404  404800 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.201928  404800 kubeconfig.go:125] found "functional-261311" server: "https://192.168.49.2:8441"
	I1212 20:35:20.203285  404800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:35:20.213068  404800 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 20:20:42.746943766 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 20:35:18.963900938 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
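The drift check shown above is nothing more than `diff -u` between the previously applied /var/tmp/minikube/kubeadm.yaml and the freshly rendered kubeadm.yaml.new; a non-empty diff is what triggers the reconfiguration that follows. A minimal Go sketch of that decision — paths copied from the log, the actual reconfigure step reduced to a comment:

package main

import (
	"fmt"
	"os/exec"
)

// configDrifted mirrors the check in the log: `diff -u old new` exits 0 when
// the files match, 1 when they differ, and >1 when diff itself failed.
func configDrifted(applied, rendered string) (bool, string, error) {
	out, err := exec.Command("sudo", "diff", "-u", applied, rendered).CombinedOutput()
	if err == nil {
		return false, "", nil
	}
	if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 1 {
		return true, string(out), nil
	}
	return false, "", err
}

func main() {
	drifted, patch, err := configDrifted(
		"/var/tmp/minikube/kubeadm.yaml",
		"/var/tmp/minikube/kubeadm.yaml.new",
	)
	if err != nil {
		fmt.Println("diff failed:", err)
		return
	}
	if drifted {
		fmt.Print("config drift detected:\n" + patch)
		// minikube then copies kubeadm.yaml.new over kubeadm.yaml and
		// re-runs the kubeadm init phases shown further down in the log.
	}
}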
	I1212 20:35:20.213088  404800 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:35:20.213099  404800 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 20:35:20.213154  404800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:35:20.242899  404800 cri.go:89] found id: ""
	I1212 20:35:20.242960  404800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:35:20.261588  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:35:20.270004  404800 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 12 20:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 20:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 20:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 20:24 /etc/kubernetes/scheduler.conf
	
	I1212 20:35:20.270062  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:35:20.278110  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:35:20.285789  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.285844  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:35:20.293376  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:35:20.301132  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.301185  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:35:20.309065  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:35:20.316914  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.316967  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:35:20.324673  404800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:35:20.332520  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:20.381164  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:21.740495  404800 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.359307117s)
	I1212 20:35:21.740554  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:21.936349  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:22.006437  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
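Rather than a full `kubeadm init`, the restart path above re-runs the individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config. A simplified Go sketch of that sequence — the log wraps each call in `sudo /bin/bash -c "env PATH=..."`, which is omitted here, and error handling is reduced to a print:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Binary directory and config path as they appear in the log above.
	kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"

	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{kubeadm}, phase...)
		args = append(args, "--config", cfg)
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed: %v\n%s", phase, err, out)
			return
		}
	}
}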
	I1212 20:35:22.060809  404800 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:35:22.060899  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:22.561081  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:23.062037  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:23.561673  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:24.061283  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:24.561690  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:25.061084  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:25.561740  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:26.061753  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:26.561615  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:27.061476  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:27.561193  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:28.061088  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:28.561754  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:29.061218  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:29.561124  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:30.061364  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:30.561503  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:31.061616  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:31.561042  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:32.061002  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:32.561635  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:33.061101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:33.561100  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:34.061640  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:34.562032  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:35.061030  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:35.561966  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:36.061881  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:36.561895  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:37.061604  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:37.561065  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:38.062060  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:38.561065  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:39.061118  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:39.561000  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:40.061043  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:40.561911  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:41.061748  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:41.561627  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:42.061101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:42.561174  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:43.061190  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:43.561060  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:44.061057  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:44.561587  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:45.061910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:45.561122  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:46.061055  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:46.561141  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:47.061107  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:47.560994  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:48.062000  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:48.561057  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:49.061151  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:49.561089  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:50.061007  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:50.561745  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:51.061094  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:51.561413  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:52.061652  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:52.561706  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:53.061685  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:53.561118  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:54.061047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:54.561109  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:55.061626  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:55.561543  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:56.061374  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:56.561047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:57.062047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:57.561053  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:58.061760  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:58.561015  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:59.061910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:59.561602  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:00.061050  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:00.565101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:01.061738  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:01.561016  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:02.061584  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:02.561705  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:03.062021  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:03.561146  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:04.061266  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:04.561610  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:05.061786  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:05.561910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:06.062016  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:06.561621  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:07.061104  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:07.561077  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:08.061034  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:08.561076  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:09.061095  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:09.561610  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:10.062030  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:10.561403  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:11.061217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:11.561772  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:12.061561  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:12.561252  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:13.061001  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:13.561813  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:14.061556  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:14.561701  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:15.061061  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:15.561415  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:16.061155  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:16.561701  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:17.061682  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:17.561217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:18.061108  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:18.561055  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:19.061653  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:19.561105  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:20.061064  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:20.561836  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:21.061167  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:21.561650  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
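The long run of pgrep probes above is minikube polling, at roughly 500 ms intervals, for a kube-apiserver process; when none appears it falls through to the diagnostics gathering below. A minimal Go sketch of such a wait loop — the one-minute timeout here is an assumption for illustration, not necessarily the value minikube uses:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the probe in the log: pgrep exits 0 only when a
// matching kube-apiserver process exists on the node.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence visible in the log timestamps
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(time.Minute); err != nil {
		fmt.Println(err) // the real flow then gathers kubelet, dmesg and CRI-O logs
	}
}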
	I1212 20:36:22.061836  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:22.061921  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:22.088621  404800 cri.go:89] found id: ""
	I1212 20:36:22.088636  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.088643  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:22.088648  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:22.088710  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:22.115845  404800 cri.go:89] found id: ""
	I1212 20:36:22.115860  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.115867  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:22.115872  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:22.115934  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:22.145607  404800 cri.go:89] found id: ""
	I1212 20:36:22.145622  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.145629  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:22.145634  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:22.145694  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:22.175762  404800 cri.go:89] found id: ""
	I1212 20:36:22.175782  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.175790  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:22.175795  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:22.175852  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:22.205262  404800 cri.go:89] found id: ""
	I1212 20:36:22.205277  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.205283  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:22.205288  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:22.205343  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:22.240968  404800 cri.go:89] found id: ""
	I1212 20:36:22.240981  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.240988  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:22.240993  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:22.241050  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:22.272662  404800 cri.go:89] found id: ""
	I1212 20:36:22.272676  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.272683  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:22.272691  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:22.272700  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:22.301824  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:22.301841  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:22.370470  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:22.370488  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:22.385289  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:22.385306  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:22.449648  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:22.440970   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.441631   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443294   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443822   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.445497   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:22.440970   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.441631   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443294   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443822   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.445497   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:22.449659  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:22.449670  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:25.019320  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:25.030277  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:25.030345  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:25.060950  404800 cri.go:89] found id: ""
	I1212 20:36:25.060975  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.060982  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:25.060988  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:25.061049  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:25.087641  404800 cri.go:89] found id: ""
	I1212 20:36:25.087663  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.087670  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:25.087675  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:25.087735  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:25.114870  404800 cri.go:89] found id: ""
	I1212 20:36:25.114885  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.114893  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:25.114899  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:25.114963  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:25.140642  404800 cri.go:89] found id: ""
	I1212 20:36:25.140664  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.140671  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:25.140677  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:25.140736  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:25.166644  404800 cri.go:89] found id: ""
	I1212 20:36:25.166658  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.166665  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:25.166671  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:25.166731  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:25.192547  404800 cri.go:89] found id: ""
	I1212 20:36:25.192561  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.192567  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:25.192572  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:25.192635  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:25.231874  404800 cri.go:89] found id: ""
	I1212 20:36:25.231889  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.231895  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:25.231903  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:25.231914  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:25.315537  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:25.315559  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:25.330635  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:25.330654  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:25.395220  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:25.386939   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.387844   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389637   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389964   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.391476   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:25.386939   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.387844   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389637   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389964   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.391476   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:25.395260  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:25.395272  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:25.467585  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:25.467605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:27.999765  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:28.012318  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:28.012406  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:28.038452  404800 cri.go:89] found id: ""
	I1212 20:36:28.038467  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.038475  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:28.038481  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:28.038550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:28.065565  404800 cri.go:89] found id: ""
	I1212 20:36:28.065579  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.065586  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:28.065591  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:28.065652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:28.091553  404800 cri.go:89] found id: ""
	I1212 20:36:28.091574  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.091581  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:28.091587  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:28.091651  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:28.117664  404800 cri.go:89] found id: ""
	I1212 20:36:28.117677  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.117684  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:28.117689  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:28.117747  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:28.143314  404800 cri.go:89] found id: ""
	I1212 20:36:28.143328  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.143335  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:28.143339  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:28.143396  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:28.170365  404800 cri.go:89] found id: ""
	I1212 20:36:28.170379  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.170386  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:28.170391  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:28.170450  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:28.194993  404800 cri.go:89] found id: ""
	I1212 20:36:28.195013  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.195019  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:28.195027  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:28.195037  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:28.264144  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:28.264163  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:28.294480  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:28.294497  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:28.364064  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:28.364087  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:28.378788  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:28.378811  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:28.443238  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:28.435365   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.435947   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437460   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437963   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.439466   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:28.435365   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.435947   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437460   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437963   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.439466   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:30.944182  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:30.954580  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:30.954652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:30.981452  404800 cri.go:89] found id: ""
	I1212 20:36:30.981467  404800 logs.go:282] 0 containers: []
	W1212 20:36:30.981474  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:30.981479  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:30.981543  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:31.009852  404800 cri.go:89] found id: ""
	I1212 20:36:31.009868  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.009875  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:31.009881  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:31.009949  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:31.041648  404800 cri.go:89] found id: ""
	I1212 20:36:31.041664  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.041671  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:31.041676  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:31.041741  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:31.071159  404800 cri.go:89] found id: ""
	I1212 20:36:31.071194  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.071203  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:31.071208  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:31.071274  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:31.101318  404800 cri.go:89] found id: ""
	I1212 20:36:31.101333  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.101340  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:31.101345  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:31.101407  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:31.128905  404800 cri.go:89] found id: ""
	I1212 20:36:31.128921  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.128937  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:31.128943  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:31.129019  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:31.156884  404800 cri.go:89] found id: ""
	I1212 20:36:31.156899  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.156906  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:31.156914  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:31.156924  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:31.229169  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:31.229188  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:31.244638  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:31.244655  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:31.316835  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:31.307348   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.308074   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.309792   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.310466   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.311410   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:31.307348   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.308074   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.309792   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.310466   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.311410   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:31.316848  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:31.316866  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:31.386236  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:31.386258  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:33.917579  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:33.927716  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:33.927782  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:33.952915  404800 cri.go:89] found id: ""
	I1212 20:36:33.952929  404800 logs.go:282] 0 containers: []
	W1212 20:36:33.952936  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:33.952941  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:33.952998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:33.986667  404800 cri.go:89] found id: ""
	I1212 20:36:33.986681  404800 logs.go:282] 0 containers: []
	W1212 20:36:33.986688  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:33.986693  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:33.986753  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:34.017351  404800 cri.go:89] found id: ""
	I1212 20:36:34.017367  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.017374  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:34.017379  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:34.017459  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:34.044495  404800 cri.go:89] found id: ""
	I1212 20:36:34.044509  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.044517  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:34.044522  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:34.044579  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:34.070939  404800 cri.go:89] found id: ""
	I1212 20:36:34.070953  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.070960  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:34.070964  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:34.071022  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:34.099384  404800 cri.go:89] found id: ""
	I1212 20:36:34.099398  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.099405  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:34.099411  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:34.099469  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:34.125342  404800 cri.go:89] found id: ""
	I1212 20:36:34.125357  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.125364  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:34.125372  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:34.125383  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:34.195370  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:34.195391  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:34.212114  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:34.212130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:34.294767  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:34.286119   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.286818   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.288478   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.289037   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.290758   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:34.286119   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.286818   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.288478   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.289037   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.290758   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:34.294788  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:34.294798  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:34.365333  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:34.365354  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:36.899244  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:36.909418  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:36.909481  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:36.934188  404800 cri.go:89] found id: ""
	I1212 20:36:36.934202  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.934219  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:36.934224  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:36.934281  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:36.959806  404800 cri.go:89] found id: ""
	I1212 20:36:36.959821  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.959828  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:36.959832  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:36.959898  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:36.986148  404800 cri.go:89] found id: ""
	I1212 20:36:36.986162  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.986169  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:36.986174  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:36.986231  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:37.017876  404800 cri.go:89] found id: ""
	I1212 20:36:37.017892  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.017899  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:37.017905  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:37.017971  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:37.047901  404800 cri.go:89] found id: ""
	I1212 20:36:37.047915  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.047921  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:37.047926  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:37.047985  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:37.076531  404800 cri.go:89] found id: ""
	I1212 20:36:37.076546  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.076553  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:37.076558  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:37.076615  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:37.102846  404800 cri.go:89] found id: ""
	I1212 20:36:37.102870  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.102877  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:37.102885  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:37.102896  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:37.134007  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:37.134024  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:37.207327  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:37.207352  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:37.222638  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:37.222657  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:37.290385  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:37.281958   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.282679   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.283817   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.284511   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.286319   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:37.281958   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.282679   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.283817   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.284511   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.286319   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:37.290395  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:37.290406  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:39.860964  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:39.871500  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:39.871558  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:39.898740  404800 cri.go:89] found id: ""
	I1212 20:36:39.898755  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.898762  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:39.898767  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:39.898830  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:39.925154  404800 cri.go:89] found id: ""
	I1212 20:36:39.925168  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.925175  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:39.925180  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:39.925239  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:39.950208  404800 cri.go:89] found id: ""
	I1212 20:36:39.950223  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.950229  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:39.950234  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:39.950297  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:39.976836  404800 cri.go:89] found id: ""
	I1212 20:36:39.976851  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.976857  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:39.976863  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:39.976936  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:40.009665  404800 cri.go:89] found id: ""
	I1212 20:36:40.009695  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.010153  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:40.010168  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:40.010262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:40.067797  404800 cri.go:89] found id: ""
	I1212 20:36:40.067813  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.067838  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:40.067844  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:40.067922  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:40.103262  404800 cri.go:89] found id: ""
	I1212 20:36:40.103277  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.103287  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:40.103295  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:40.103308  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:40.119554  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:40.119573  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:40.195337  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:40.185349   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.186460   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188199   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188873   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.190824   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:40.185349   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.186460   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188199   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188873   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.190824   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:40.195364  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:40.195376  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:40.270010  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:40.270029  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:40.299631  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:40.299652  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:42.866117  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:42.876408  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:42.876467  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:42.901308  404800 cri.go:89] found id: ""
	I1212 20:36:42.901321  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.901328  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:42.901333  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:42.901396  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:42.925954  404800 cri.go:89] found id: ""
	I1212 20:36:42.925968  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.925975  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:42.925980  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:42.926041  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:42.951209  404800 cri.go:89] found id: ""
	I1212 20:36:42.951224  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.951231  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:42.951236  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:42.951296  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:42.977995  404800 cri.go:89] found id: ""
	I1212 20:36:42.978010  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.978017  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:42.978022  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:42.978082  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:43.004860  404800 cri.go:89] found id: ""
	I1212 20:36:43.004875  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.004892  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:43.004898  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:43.004973  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:43.040400  404800 cri.go:89] found id: ""
	I1212 20:36:43.040414  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.040421  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:43.040427  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:43.040485  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:43.068090  404800 cri.go:89] found id: ""
	I1212 20:36:43.068104  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.068122  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:43.068130  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:43.068144  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:43.140175  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:43.140195  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:43.154957  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:43.154976  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:43.225443  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:43.216555   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.217274   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.218829   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.219142   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.220753   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:43.216555   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.217274   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.218829   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.219142   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.220753   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:43.225462  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:43.225473  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:43.307152  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:43.307175  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:45.837432  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:45.847721  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:45.847783  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:45.874064  404800 cri.go:89] found id: ""
	I1212 20:36:45.874118  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.874125  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:45.874131  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:45.874197  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:45.902655  404800 cri.go:89] found id: ""
	I1212 20:36:45.902669  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.902676  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:45.902681  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:45.902739  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:45.929017  404800 cri.go:89] found id: ""
	I1212 20:36:45.929031  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.929044  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:45.929050  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:45.929118  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:45.958749  404800 cri.go:89] found id: ""
	I1212 20:36:45.958763  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.958770  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:45.958776  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:45.958837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:45.989217  404800 cri.go:89] found id: ""
	I1212 20:36:45.989239  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.989246  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:45.989252  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:45.989317  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:46.017594  404800 cri.go:89] found id: ""
	I1212 20:36:46.017609  404800 logs.go:282] 0 containers: []
	W1212 20:36:46.017616  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:46.017621  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:46.017681  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:46.047594  404800 cri.go:89] found id: ""
	I1212 20:36:46.047619  404800 logs.go:282] 0 containers: []
	W1212 20:36:46.047628  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:46.047636  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:46.047647  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:46.113115  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:46.113137  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:46.128309  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:46.128328  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:46.195035  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:46.186544   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.187172   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.188933   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.189538   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.191089   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:46.186544   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.187172   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.188933   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.189538   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.191089   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:46.195044  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:46.195054  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:46.268896  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:46.268917  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:48.800382  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:48.810496  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:48.810556  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:48.835685  404800 cri.go:89] found id: ""
	I1212 20:36:48.835699  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.835706  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:48.835712  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:48.835772  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:48.864872  404800 cri.go:89] found id: ""
	I1212 20:36:48.864892  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.864899  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:48.864904  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:48.864969  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:48.889491  404800 cri.go:89] found id: ""
	I1212 20:36:48.889505  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.889512  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:48.889517  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:48.889577  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:48.914454  404800 cri.go:89] found id: ""
	I1212 20:36:48.914468  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.914474  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:48.914480  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:48.914533  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:48.938478  404800 cri.go:89] found id: ""
	I1212 20:36:48.938492  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.938499  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:48.938504  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:48.938570  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:48.964129  404800 cri.go:89] found id: ""
	I1212 20:36:48.964143  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.964151  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:48.964156  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:48.964221  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:48.989666  404800 cri.go:89] found id: ""
	I1212 20:36:48.989680  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.989687  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:48.989695  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:48.989705  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:49.063089  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:49.063110  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:49.095579  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:49.095596  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:49.163720  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:49.163740  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:49.178328  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:49.178344  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:49.260325  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:49.251791   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.252708   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.253936   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.254698   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.256413   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:49.251791   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.252708   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.253936   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.254698   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.256413   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:51.761045  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:51.771641  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:51.771702  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:51.797458  404800 cri.go:89] found id: ""
	I1212 20:36:51.797472  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.797479  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:51.797484  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:51.797541  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:51.823244  404800 cri.go:89] found id: ""
	I1212 20:36:51.823268  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.823274  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:51.823279  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:51.823346  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:51.848495  404800 cri.go:89] found id: ""
	I1212 20:36:51.848509  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.848516  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:51.848520  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:51.848580  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:51.873152  404800 cri.go:89] found id: ""
	I1212 20:36:51.873168  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.873175  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:51.873180  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:51.873238  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:51.898283  404800 cri.go:89] found id: ""
	I1212 20:36:51.898297  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.898305  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:51.898310  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:51.898370  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:51.924343  404800 cri.go:89] found id: ""
	I1212 20:36:51.924358  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.924386  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:51.924392  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:51.924455  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:51.949330  404800 cri.go:89] found id: ""
	I1212 20:36:51.949345  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.949352  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:51.949359  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:51.949371  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:52.016304  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:52.016326  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:52.032963  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:52.032980  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:52.109987  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:52.099831   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.100720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.101466   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.103451   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.104261   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:52.099831   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.100720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.101466   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.103451   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.104261   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:52.109999  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:52.110012  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:52.180144  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:52.180164  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:54.720069  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:54.730740  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:54.730803  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:54.758017  404800 cri.go:89] found id: ""
	I1212 20:36:54.758032  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.758038  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:54.758044  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:54.758105  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:54.790190  404800 cri.go:89] found id: ""
	I1212 20:36:54.790210  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.790217  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:54.790222  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:54.790281  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:54.819974  404800 cri.go:89] found id: ""
	I1212 20:36:54.819989  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.819996  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:54.820001  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:54.820065  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:54.847251  404800 cri.go:89] found id: ""
	I1212 20:36:54.847265  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.847272  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:54.847277  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:54.847342  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:54.873168  404800 cri.go:89] found id: ""
	I1212 20:36:54.873182  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.873190  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:54.873195  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:54.873262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:54.898145  404800 cri.go:89] found id: ""
	I1212 20:36:54.898160  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.898167  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:54.898175  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:54.898237  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:54.924123  404800 cri.go:89] found id: ""
	I1212 20:36:54.924146  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.924155  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:54.924163  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:54.924173  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:54.989756  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:54.989775  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:55.021117  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:55.021137  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:55.090802  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:55.082767   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.083409   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.084984   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.085445   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.086924   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:55.082767   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.083409   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.084984   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.085445   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.086924   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:55.090816  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:55.090828  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:55.164266  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:55.164287  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:57.696458  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:57.706599  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:57.706656  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:57.732396  404800 cri.go:89] found id: ""
	I1212 20:36:57.732410  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.732420  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:57.732425  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:57.732485  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:57.758017  404800 cri.go:89] found id: ""
	I1212 20:36:57.758032  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.758039  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:57.758044  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:57.758100  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:57.784957  404800 cri.go:89] found id: ""
	I1212 20:36:57.784971  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.784978  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:57.784983  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:57.785044  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:57.810973  404800 cri.go:89] found id: ""
	I1212 20:36:57.810986  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.810993  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:57.810999  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:57.811054  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:57.837384  404800 cri.go:89] found id: ""
	I1212 20:36:57.837398  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.837406  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:57.837411  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:57.837487  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:57.863576  404800 cri.go:89] found id: ""
	I1212 20:36:57.863598  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.863605  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:57.863610  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:57.863676  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:57.889215  404800 cri.go:89] found id: ""
	I1212 20:36:57.889236  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.889244  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:57.889252  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:57.889263  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:57.956054  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:57.956076  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:57.970574  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:57.970590  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:58.038134  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:58.029330   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.029739   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.031379   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.032214   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.033970   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:58.029330   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.029739   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.031379   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.032214   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.033970   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:58.038144  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:58.038160  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:58.109516  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:58.109541  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:00.640789  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:00.651136  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:00.651196  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:00.678187  404800 cri.go:89] found id: ""
	I1212 20:37:00.678202  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.678209  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:00.678215  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:00.678275  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:00.703384  404800 cri.go:89] found id: ""
	I1212 20:37:00.703400  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.703407  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:00.703412  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:00.703474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:00.735999  404800 cri.go:89] found id: ""
	I1212 20:37:00.736013  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.736020  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:00.736025  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:00.736083  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:00.762232  404800 cri.go:89] found id: ""
	I1212 20:37:00.762246  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.762253  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:00.762258  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:00.762314  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:00.788575  404800 cri.go:89] found id: ""
	I1212 20:37:00.788589  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.788596  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:00.788601  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:00.788663  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:00.815050  404800 cri.go:89] found id: ""
	I1212 20:37:00.815065  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.815081  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:00.815087  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:00.815146  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:00.840166  404800 cri.go:89] found id: ""
	I1212 20:37:00.840180  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.840196  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:00.840205  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:00.840216  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:00.905766  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:00.905787  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:00.920612  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:00.920631  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:00.987903  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:00.979886   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.980290   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.981934   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.982374   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.983860   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:00.979886   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.980290   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.981934   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.982374   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.983860   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:00.987914  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:00.987926  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:01.058125  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:01.058146  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:03.588584  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:03.599133  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:03.599202  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:03.629322  404800 cri.go:89] found id: ""
	I1212 20:37:03.629336  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.629343  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:03.629348  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:03.629410  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:03.654415  404800 cri.go:89] found id: ""
	I1212 20:37:03.654429  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.654436  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:03.654443  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:03.654499  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:03.679922  404800 cri.go:89] found id: ""
	I1212 20:37:03.679937  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.679944  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:03.679950  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:03.680015  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:03.706619  404800 cri.go:89] found id: ""
	I1212 20:37:03.706634  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.706640  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:03.706646  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:03.706707  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:03.733101  404800 cri.go:89] found id: ""
	I1212 20:37:03.733116  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.733123  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:03.733128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:03.733189  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:03.758431  404800 cri.go:89] found id: ""
	I1212 20:37:03.758445  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.758452  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:03.758457  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:03.758520  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:03.789138  404800 cri.go:89] found id: ""
	I1212 20:37:03.789152  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.789159  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:03.789166  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:03.789177  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:03.852394  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:03.843826   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.844548   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846260   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846901   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.848580   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:03.843826   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.844548   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846260   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846901   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.848580   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:03.852404  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:03.852415  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:03.921263  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:03.921283  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:03.950006  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:03.950022  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:04.020715  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:04.020739  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:06.536553  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:06.547113  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:06.547176  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:06.575862  404800 cri.go:89] found id: ""
	I1212 20:37:06.575876  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.575883  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:06.575888  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:06.575947  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:06.601781  404800 cri.go:89] found id: ""
	I1212 20:37:06.601796  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.601803  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:06.601808  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:06.601868  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:06.627486  404800 cri.go:89] found id: ""
	I1212 20:37:06.627500  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.627507  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:06.627520  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:06.627577  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:06.656432  404800 cri.go:89] found id: ""
	I1212 20:37:06.656446  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.656454  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:06.656465  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:06.656526  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:06.681705  404800 cri.go:89] found id: ""
	I1212 20:37:06.681719  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.681726  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:06.681731  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:06.681794  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:06.707068  404800 cri.go:89] found id: ""
	I1212 20:37:06.707083  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.707090  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:06.707095  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:06.707157  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:06.734286  404800 cri.go:89] found id: ""
	I1212 20:37:06.734300  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.734307  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:06.734314  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:06.734324  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:06.799595  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:06.799616  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:06.814521  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:06.814543  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:06.881453  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:06.872121   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.872841   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.874695   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.875330   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.876927   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:06.872121   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.872841   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.874695   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.875330   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.876927   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:06.881463  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:06.881474  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:06.950345  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:06.950365  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:09.488970  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:09.500875  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:09.500940  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:09.529418  404800 cri.go:89] found id: ""
	I1212 20:37:09.529433  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.529439  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:09.529445  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:09.529505  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:09.559685  404800 cri.go:89] found id: ""
	I1212 20:37:09.559700  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.559707  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:09.559712  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:09.559772  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:09.587781  404800 cri.go:89] found id: ""
	I1212 20:37:09.587796  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.587802  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:09.587807  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:09.587869  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:09.613804  404800 cri.go:89] found id: ""
	I1212 20:37:09.613820  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.613826  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:09.613832  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:09.613903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:09.639550  404800 cri.go:89] found id: ""
	I1212 20:37:09.639566  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.639573  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:09.639578  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:09.639644  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:09.669938  404800 cri.go:89] found id: ""
	I1212 20:37:09.669953  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.669960  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:09.669965  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:09.670025  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:09.696771  404800 cri.go:89] found id: ""
	I1212 20:37:09.696785  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.696799  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:09.696807  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:09.696818  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:09.763319  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:09.763340  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:09.778782  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:09.778799  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:09.846376  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:09.837510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.838340   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.839144   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.840746   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.841106   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:09.837510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.838340   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.839144   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.840746   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.841106   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:09.846385  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:09.846396  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:09.917476  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:09.917497  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:12.447817  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:12.457978  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:12.458042  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:12.491473  404800 cri.go:89] found id: ""
	I1212 20:37:12.491487  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.491495  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:12.491500  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:12.491559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:12.522865  404800 cri.go:89] found id: ""
	I1212 20:37:12.522881  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.522888  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:12.522892  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:12.522959  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:12.548498  404800 cri.go:89] found id: ""
	I1212 20:37:12.548514  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.548521  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:12.548526  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:12.548592  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:12.579700  404800 cri.go:89] found id: ""
	I1212 20:37:12.579714  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.579721  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:12.579726  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:12.579791  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:12.606849  404800 cri.go:89] found id: ""
	I1212 20:37:12.606863  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.606870  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:12.606878  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:12.606942  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:12.632352  404800 cri.go:89] found id: ""
	I1212 20:37:12.632386  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.632394  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:12.632400  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:12.632464  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:12.657776  404800 cri.go:89] found id: ""
	I1212 20:37:12.657791  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.657798  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:12.657805  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:12.657816  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:12.672067  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:12.672083  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:12.744080  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:12.736614   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.737064   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738565   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738904   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.740331   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:12.736614   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.737064   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738565   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738904   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.740331   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:12.744093  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:12.744103  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:12.811395  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:12.811414  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:12.839843  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:12.839862  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:15.405601  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:15.417051  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:15.417110  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:15.442503  404800 cri.go:89] found id: ""
	I1212 20:37:15.442517  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.442524  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:15.442530  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:15.442588  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:15.483736  404800 cri.go:89] found id: ""
	I1212 20:37:15.483763  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.483770  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:15.483775  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:15.483843  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:15.515671  404800 cri.go:89] found id: ""
	I1212 20:37:15.515685  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.515692  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:15.515697  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:15.515764  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:15.548136  404800 cri.go:89] found id: ""
	I1212 20:37:15.548151  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.548158  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:15.548163  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:15.548221  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:15.576936  404800 cri.go:89] found id: ""
	I1212 20:37:15.576951  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.576958  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:15.576962  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:15.577022  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:15.603608  404800 cri.go:89] found id: ""
	I1212 20:37:15.603622  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.603629  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:15.603634  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:15.603689  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:15.638105  404800 cri.go:89] found id: ""
	I1212 20:37:15.638125  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.638133  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:15.638140  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:15.638150  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:15.708493  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:15.708513  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:15.723827  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:15.723851  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:15.792302  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:15.784344   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.784799   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786487   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786941   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.788392   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:15.784344   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.784799   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786487   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786941   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.788392   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:15.792314  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:15.792326  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:15.860772  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:15.860796  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:18.397462  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:18.407317  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:18.407382  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:18.433353  404800 cri.go:89] found id: ""
	I1212 20:37:18.433368  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.433375  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:18.433379  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:18.433435  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:18.465547  404800 cri.go:89] found id: ""
	I1212 20:37:18.465561  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.465568  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:18.465572  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:18.465629  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:18.498811  404800 cri.go:89] found id: ""
	I1212 20:37:18.498825  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.498832  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:18.498837  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:18.498894  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:18.525729  404800 cri.go:89] found id: ""
	I1212 20:37:18.525745  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.525752  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:18.525758  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:18.525820  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:18.555807  404800 cri.go:89] found id: ""
	I1212 20:37:18.555822  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.555829  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:18.555834  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:18.555890  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:18.586968  404800 cri.go:89] found id: ""
	I1212 20:37:18.586982  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.586989  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:18.586994  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:18.587048  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:18.613654  404800 cri.go:89] found id: ""
	I1212 20:37:18.613668  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.613675  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:18.613683  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:18.613694  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:18.685435  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:18.685464  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:18.701543  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:18.701560  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:18.771148  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:18.762368   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.763025   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765038   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765857   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.767427   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:18.762368   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.763025   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765038   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765857   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.767427   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:18.771159  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:18.771169  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:18.840302  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:18.840324  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
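
Every retry above follows the same pattern: the runner probes for each control-plane container with crictl, finds none, gathers kubelet, dmesg, CRI-O and container-status logs, and "describe nodes" fails because nothing is listening on localhost:8441. The cycle repeats roughly every three seconds for the rest of this log. A minimal sketch of confirming the apiserver state directly on the node (assuming shell access to the same profile, e.g. via minikube ssh; the commands mirror the ones the runner executes above):

    # list all CRI containers; on a healthy node a kube-apiserver container appears here
    sudo crictl ps -a --name kube-apiserver

    # the port kubectl is dialing; "connection refused" here matches the errors above
    curl -ksS https://localhost:8441/healthz || echo "apiserver not listening"
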
	I1212 20:37:21.370649  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:21.380730  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:21.380785  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:21.407262  404800 cri.go:89] found id: ""
	I1212 20:37:21.407277  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.407285  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:21.407290  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:21.407353  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:21.431725  404800 cri.go:89] found id: ""
	I1212 20:37:21.431741  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.431748  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:21.431753  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:21.431808  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:21.462830  404800 cri.go:89] found id: ""
	I1212 20:37:21.462844  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.462851  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:21.462856  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:21.462914  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:21.490038  404800 cri.go:89] found id: ""
	I1212 20:37:21.490053  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.490060  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:21.490066  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:21.490123  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:21.522135  404800 cri.go:89] found id: ""
	I1212 20:37:21.522152  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.522165  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:21.522170  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:21.522243  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:21.550272  404800 cri.go:89] found id: ""
	I1212 20:37:21.550286  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.550293  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:21.550298  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:21.550352  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:21.575855  404800 cri.go:89] found id: ""
	I1212 20:37:21.575868  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.575875  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:21.575882  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:21.575892  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:21.643213  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:21.643234  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:21.676057  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:21.676076  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:21.746870  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:21.746890  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:21.762368  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:21.762383  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:21.829472  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:21.821498   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.822053   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.823553   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.824031   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.825114   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:21.821498   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.822053   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.823553   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.824031   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.825114   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:24.331150  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:24.341451  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:24.341509  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:24.365339  404800 cri.go:89] found id: ""
	I1212 20:37:24.365354  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.365362  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:24.365367  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:24.365430  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:24.392822  404800 cri.go:89] found id: ""
	I1212 20:37:24.392837  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.392844  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:24.392849  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:24.392941  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:24.419333  404800 cri.go:89] found id: ""
	I1212 20:37:24.419347  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.419354  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:24.419365  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:24.419422  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:24.444927  404800 cri.go:89] found id: ""
	I1212 20:37:24.444940  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.444947  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:24.444952  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:24.445014  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:24.479382  404800 cri.go:89] found id: ""
	I1212 20:37:24.479411  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.479422  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:24.479427  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:24.479496  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:24.519373  404800 cri.go:89] found id: ""
	I1212 20:37:24.519387  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.519394  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:24.519399  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:24.519458  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:24.546714  404800 cri.go:89] found id: ""
	I1212 20:37:24.546729  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.546736  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:24.546744  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:24.546755  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:24.612546  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:24.612568  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:24.627419  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:24.627435  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:24.695735  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:24.686719   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.687385   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689276   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689753   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.691296   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:24.686719   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.687385   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689276   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689753   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.691296   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:24.695745  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:24.695757  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:24.764903  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:24.764929  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:27.295998  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:27.306158  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:27.306222  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:27.331510  404800 cri.go:89] found id: ""
	I1212 20:37:27.331524  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.331532  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:27.331549  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:27.331608  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:27.357120  404800 cri.go:89] found id: ""
	I1212 20:37:27.357134  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.357141  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:27.357146  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:27.357227  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:27.383390  404800 cri.go:89] found id: ""
	I1212 20:37:27.383404  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.383411  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:27.383416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:27.383471  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:27.408672  404800 cri.go:89] found id: ""
	I1212 20:37:27.408687  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.408695  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:27.408699  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:27.408758  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:27.434453  404800 cri.go:89] found id: ""
	I1212 20:37:27.434467  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.434478  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:27.434483  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:27.434542  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:27.467590  404800 cri.go:89] found id: ""
	I1212 20:37:27.467603  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.467610  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:27.467615  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:27.467672  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:27.501872  404800 cri.go:89] found id: ""
	I1212 20:37:27.501886  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.501893  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:27.501900  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:27.501912  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:27.574950  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:27.574971  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:27.590147  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:27.590163  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:27.659572  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:27.651234   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.652048   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.653725   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.654359   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.655385   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:27.651234   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.652048   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.653725   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.654359   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.655385   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:27.659583  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:27.659594  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:27.728089  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:27.728111  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:30.260552  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:30.272906  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:30.272984  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:30.302879  404800 cri.go:89] found id: ""
	I1212 20:37:30.302903  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.302911  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:30.302916  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:30.302993  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:30.332792  404800 cri.go:89] found id: ""
	I1212 20:37:30.332807  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.332814  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:30.332819  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:30.332877  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:30.359283  404800 cri.go:89] found id: ""
	I1212 20:37:30.359298  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.359306  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:30.359311  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:30.359369  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:30.385609  404800 cri.go:89] found id: ""
	I1212 20:37:30.385624  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.385643  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:30.385649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:30.385709  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:30.410328  404800 cri.go:89] found id: ""
	I1212 20:37:30.410343  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.410358  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:30.410362  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:30.410423  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:30.435005  404800 cri.go:89] found id: ""
	I1212 20:37:30.435019  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.435026  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:30.435031  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:30.435089  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:30.474088  404800 cri.go:89] found id: ""
	I1212 20:37:30.474102  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.474109  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:30.474116  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:30.474127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:30.508894  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:30.508918  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:30.583876  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:30.583895  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:30.599205  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:30.599229  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:30.667713  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:30.659122   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.659662   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661283   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661849   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.663383   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:30.659122   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.659662   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661283   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661849   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.663383   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:30.667723  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:30.667749  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:33.236428  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:33.246549  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:33.246607  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:33.272236  404800 cri.go:89] found id: ""
	I1212 20:37:33.272250  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.272257  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:33.272262  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:33.272324  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:33.297982  404800 cri.go:89] found id: ""
	I1212 20:37:33.297997  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.298004  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:33.298009  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:33.298068  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:33.324170  404800 cri.go:89] found id: ""
	I1212 20:37:33.324183  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.324190  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:33.324195  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:33.324252  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:33.350869  404800 cri.go:89] found id: ""
	I1212 20:37:33.350883  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.350890  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:33.350895  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:33.350950  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:33.376336  404800 cri.go:89] found id: ""
	I1212 20:37:33.376352  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.376360  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:33.376384  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:33.376446  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:33.402358  404800 cri.go:89] found id: ""
	I1212 20:37:33.402371  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.402378  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:33.402384  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:33.402444  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:33.428067  404800 cri.go:89] found id: ""
	I1212 20:37:33.428081  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.428088  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:33.428104  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:33.428114  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:33.498721  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:33.498744  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:33.532343  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:33.532362  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:33.601583  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:33.601603  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:33.616929  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:33.616947  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:33.680299  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:33.671666   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.672531   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674007   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674498   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.676176   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:33.671666   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.672531   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674007   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674498   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.676176   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:36.180540  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:36.191300  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:36.191360  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:36.219483  404800 cri.go:89] found id: ""
	I1212 20:37:36.219498  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.219505  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:36.219511  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:36.219569  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:36.246240  404800 cri.go:89] found id: ""
	I1212 20:37:36.246255  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.246262  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:36.246267  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:36.246326  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:36.272949  404800 cri.go:89] found id: ""
	I1212 20:37:36.272962  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.272969  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:36.272975  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:36.273038  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:36.298716  404800 cri.go:89] found id: ""
	I1212 20:37:36.298731  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.298738  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:36.298743  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:36.298798  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:36.325228  404800 cri.go:89] found id: ""
	I1212 20:37:36.325242  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.325249  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:36.325254  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:36.325312  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:36.350322  404800 cri.go:89] found id: ""
	I1212 20:37:36.350337  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.350344  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:36.350350  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:36.350406  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:36.380083  404800 cri.go:89] found id: ""
	I1212 20:37:36.380097  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.380104  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:36.380117  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:36.380128  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:36.442887  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:36.434327   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.435078   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.436885   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.437411   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.438936   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:36.434327   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.435078   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.436885   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.437411   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.438936   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:36.442899  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:36.442910  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:36.514571  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:36.514592  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:36.549020  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:36.549036  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:36.615002  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:36.615023  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:39.129960  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:39.139842  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:39.139903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:39.164988  404800 cri.go:89] found id: ""
	I1212 20:37:39.165003  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.165010  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:39.165014  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:39.165072  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:39.195151  404800 cri.go:89] found id: ""
	I1212 20:37:39.195166  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.195172  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:39.195177  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:39.195235  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:39.223301  404800 cri.go:89] found id: ""
	I1212 20:37:39.223315  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.223322  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:39.223327  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:39.223384  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:39.248078  404800 cri.go:89] found id: ""
	I1212 20:37:39.248093  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.248100  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:39.248105  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:39.248162  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:39.272363  404800 cri.go:89] found id: ""
	I1212 20:37:39.272403  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.272411  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:39.272415  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:39.272474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:39.297353  404800 cri.go:89] found id: ""
	I1212 20:37:39.297367  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.297374  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:39.297379  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:39.297437  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:39.322842  404800 cri.go:89] found id: ""
	I1212 20:37:39.322855  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.322863  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:39.322870  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:39.322881  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:39.337445  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:39.337460  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:39.398684  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:39.390797   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.391338   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.392503   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.393095   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.394860   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:39.390797   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.391338   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.392503   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.393095   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.394860   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:39.398694  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:39.398704  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:39.472608  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:39.472628  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:39.511488  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:39.517700  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:42.092404  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:42.104757  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:42.104826  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:42.137172  404800 cri.go:89] found id: ""
	I1212 20:37:42.137189  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.137198  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:42.137204  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:42.137277  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:42.168320  404800 cri.go:89] found id: ""
	I1212 20:37:42.168336  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.168344  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:42.168349  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:42.168455  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:42.202618  404800 cri.go:89] found id: ""
	I1212 20:37:42.202633  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.202641  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:42.202647  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:42.202714  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:42.232011  404800 cri.go:89] found id: ""
	I1212 20:37:42.232026  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.232034  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:42.232039  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:42.232101  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:42.260345  404800 cri.go:89] found id: ""
	I1212 20:37:42.260360  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.260398  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:42.260403  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:42.260465  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:42.286857  404800 cri.go:89] found id: ""
	I1212 20:37:42.286882  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.286890  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:42.286898  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:42.286968  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:42.314846  404800 cri.go:89] found id: ""
	I1212 20:37:42.314870  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.314877  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:42.314885  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:42.314898  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:42.382203  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:42.382223  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:42.397537  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:42.397554  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:42.463930  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:42.455367   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.456320   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458022   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458334   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.459806   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:42.455367   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.456320   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458022   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458334   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.459806   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:42.463940  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:42.463951  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:42.539788  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:42.539809  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
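
Since no kube-apiserver container is ever created, the kubelet journal collected in each cycle is where the underlying cause would normally surface (for example a static-pod manifest or image problem). A hedged sketch of narrowing that down on the node, assuming the standard kubeadm static-pod layout under /etc/kubernetes/manifests:

    # static pod manifests the kubelet should be starting
    ls -l /etc/kubernetes/manifests/

    # recent kubelet errors, from the same journal the runner gathers above
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
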
	I1212 20:37:45.073125  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:45.091416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:45.091491  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:45.126675  404800 cri.go:89] found id: ""
	I1212 20:37:45.126699  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.126707  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:45.126714  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:45.126789  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:45.167457  404800 cri.go:89] found id: ""
	I1212 20:37:45.167475  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.167483  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:45.167489  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:45.167559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:45.226232  404800 cri.go:89] found id: ""
	I1212 20:37:45.226264  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.226292  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:45.226299  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:45.226372  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:45.273410  404800 cri.go:89] found id: ""
	I1212 20:37:45.273427  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.273435  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:45.273441  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:45.273513  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:45.313155  404800 cri.go:89] found id: ""
	I1212 20:37:45.313171  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.313178  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:45.313183  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:45.313253  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:45.345614  404800 cri.go:89] found id: ""
	I1212 20:37:45.345640  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.345669  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:45.345688  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:45.345851  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:45.375592  404800 cri.go:89] found id: ""
	I1212 20:37:45.375606  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.375614  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:45.375622  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:45.375633  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:45.446441  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:45.446461  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:45.463226  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:45.463243  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:45.540934  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:45.533118   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.533590   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535134   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535468   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.536952   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:45.533118   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.533590   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535134   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535468   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.536952   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:45.540944  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:45.540955  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:45.610027  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:45.610051  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:48.142953  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:48.153422  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:48.153489  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:48.182170  404800 cri.go:89] found id: ""
	I1212 20:37:48.182185  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.182192  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:48.182197  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:48.182255  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:48.207474  404800 cri.go:89] found id: ""
	I1212 20:37:48.207498  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.207506  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:48.207511  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:48.207588  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:48.232357  404800 cri.go:89] found id: ""
	I1212 20:37:48.232391  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.232399  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:48.232404  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:48.232472  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:48.257989  404800 cri.go:89] found id: ""
	I1212 20:37:48.258016  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.258024  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:48.258029  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:48.258095  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:48.282918  404800 cri.go:89] found id: ""
	I1212 20:37:48.282932  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.282940  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:48.282945  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:48.283008  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:48.309285  404800 cri.go:89] found id: ""
	I1212 20:37:48.309299  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.309306  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:48.309311  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:48.309367  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:48.335545  404800 cri.go:89] found id: ""
	I1212 20:37:48.335559  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.335566  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:48.335573  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:48.335586  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:48.401770  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:48.401789  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:48.416320  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:48.416336  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:48.501926  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:48.486330   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.487051   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.492679   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.493283   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.495892   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:48.486330   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.487051   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.492679   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.493283   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.495892   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:48.501944  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:48.501955  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:48.576534  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:48.576555  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:51.105155  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:51.115964  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:51.116028  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:51.145401  404800 cri.go:89] found id: ""
	I1212 20:37:51.145416  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.145433  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:51.145445  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:51.145517  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:51.172664  404800 cri.go:89] found id: ""
	I1212 20:37:51.172679  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.172685  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:51.172690  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:51.172753  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:51.198093  404800 cri.go:89] found id: ""
	I1212 20:37:51.198108  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.198115  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:51.198120  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:51.198179  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:51.223420  404800 cri.go:89] found id: ""
	I1212 20:37:51.223433  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.223449  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:51.223454  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:51.223510  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:51.253134  404800 cri.go:89] found id: ""
	I1212 20:37:51.253157  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.253164  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:51.253170  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:51.253236  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:51.278738  404800 cri.go:89] found id: ""
	I1212 20:37:51.278753  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.278761  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:51.278766  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:51.278821  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:51.304296  404800 cri.go:89] found id: ""
	I1212 20:37:51.304311  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.304318  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:51.304325  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:51.304346  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:51.370289  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:51.370308  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:51.385101  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:51.385116  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:51.449107  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:51.441267   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.441910   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443391   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443793   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.445251   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:51.441267   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.441910   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443391   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443793   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.445251   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:51.449117  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:51.449127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:51.519024  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:51.519047  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:54.054216  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:54.064710  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:54.064769  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:54.091620  404800 cri.go:89] found id: ""
	I1212 20:37:54.091634  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.091641  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:54.091646  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:54.091701  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:54.122000  404800 cri.go:89] found id: ""
	I1212 20:37:54.122013  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.122020  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:54.122025  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:54.122081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:54.151439  404800 cri.go:89] found id: ""
	I1212 20:37:54.151454  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.151461  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:54.151466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:54.151520  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:54.180154  404800 cri.go:89] found id: ""
	I1212 20:37:54.180168  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.180175  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:54.180180  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:54.180235  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:54.206927  404800 cri.go:89] found id: ""
	I1212 20:37:54.206947  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.206954  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:54.206959  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:54.207014  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:54.231274  404800 cri.go:89] found id: ""
	I1212 20:37:54.231288  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.231306  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:54.231312  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:54.231366  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:54.259379  404800 cri.go:89] found id: ""
	I1212 20:37:54.259395  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.259402  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:54.259410  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:54.259420  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:54.325217  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:54.325237  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:54.339913  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:54.339930  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:54.403764  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:54.395245   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.396349   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.397140   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398216   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398891   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:54.395245   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.396349   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.397140   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398216   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398891   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:54.403774  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:54.403786  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:54.474019  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:54.474039  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:57.003568  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:57.016502  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:57.016560  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:57.042988  404800 cri.go:89] found id: ""
	I1212 20:37:57.043003  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.043010  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:57.043015  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:57.043072  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:57.071640  404800 cri.go:89] found id: ""
	I1212 20:37:57.071654  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.071661  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:57.071666  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:57.071737  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:57.098101  404800 cri.go:89] found id: ""
	I1212 20:37:57.098115  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.098123  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:57.098128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:57.098185  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:57.128276  404800 cri.go:89] found id: ""
	I1212 20:37:57.128300  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.128307  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:57.128312  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:57.128432  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:57.158908  404800 cri.go:89] found id: ""
	I1212 20:37:57.158922  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.158930  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:57.158939  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:57.159004  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:57.186146  404800 cri.go:89] found id: ""
	I1212 20:37:57.186161  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.186169  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:57.186174  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:57.186233  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:57.210969  404800 cri.go:89] found id: ""
	I1212 20:37:57.210984  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.210991  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:57.210999  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:57.211017  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:57.225391  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:57.225407  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:57.289597  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:57.280576   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.281422   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.283487   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.284167   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.285566   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:57.280576   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.281422   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.283487   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.284167   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.285566   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:57.289607  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:57.289617  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:57.362750  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:57.362771  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:57.396453  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:57.396470  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:59.967653  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:59.977921  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:59.977984  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:00.032267  404800 cri.go:89] found id: ""
	I1212 20:38:00.032297  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.032306  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:00.032312  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:00.032410  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:00.203733  404800 cri.go:89] found id: ""
	I1212 20:38:00.203752  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.203760  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:00.203766  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:00.203831  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:00.252579  404800 cri.go:89] found id: ""
	I1212 20:38:00.252596  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.252604  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:00.252610  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:00.252678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:00.301983  404800 cri.go:89] found id: ""
	I1212 20:38:00.302000  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.302009  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:00.302014  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:00.302081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:00.336785  404800 cri.go:89] found id: ""
	I1212 20:38:00.336813  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.336821  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:00.336827  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:00.336905  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:00.369703  404800 cri.go:89] found id: ""
	I1212 20:38:00.369720  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.369728  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:00.369749  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:00.369837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:00.404624  404800 cri.go:89] found id: ""
	I1212 20:38:00.404641  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.404649  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:00.404657  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:00.404669  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:00.473595  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:00.473616  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:00.493555  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:00.493572  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:00.568400  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:00.559640   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.560467   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562140   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562808   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.564591   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:00.559640   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.560467   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562140   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562808   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.564591   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:00.568411  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:00.568425  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:00.641391  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:00.641416  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:03.171500  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:03.182094  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:03.182153  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:03.207380  404800 cri.go:89] found id: ""
	I1212 20:38:03.207395  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.207402  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:03.207407  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:03.207465  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:03.232766  404800 cri.go:89] found id: ""
	I1212 20:38:03.232781  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.232788  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:03.232793  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:03.232856  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:03.263589  404800 cri.go:89] found id: ""
	I1212 20:38:03.263604  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.263611  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:03.263620  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:03.263678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:03.289719  404800 cri.go:89] found id: ""
	I1212 20:38:03.289734  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.289741  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:03.289755  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:03.289815  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:03.316755  404800 cri.go:89] found id: ""
	I1212 20:38:03.316770  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.316778  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:03.316783  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:03.316845  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:03.344424  404800 cri.go:89] found id: ""
	I1212 20:38:03.344438  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.344445  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:03.344451  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:03.344508  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:03.371242  404800 cri.go:89] found id: ""
	I1212 20:38:03.371257  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.371265  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:03.371273  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:03.371284  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:03.439155  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:03.439177  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:03.456896  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:03.456912  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:03.536136  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:03.527316   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.527920   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.529686   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.530397   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.532142   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:03.527316   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.527920   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.529686   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.530397   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.532142   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:03.536146  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:03.536159  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:03.610647  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:03.610666  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:06.146575  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:06.157383  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:06.157441  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:06.183306  404800 cri.go:89] found id: ""
	I1212 20:38:06.183321  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.183329  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:06.183334  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:06.183393  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:06.210325  404800 cri.go:89] found id: ""
	I1212 20:38:06.210340  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.210348  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:06.210353  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:06.210411  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:06.235611  404800 cri.go:89] found id: ""
	I1212 20:38:06.235625  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.235632  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:06.235638  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:06.235699  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:06.261846  404800 cri.go:89] found id: ""
	I1212 20:38:06.261860  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.261867  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:06.261872  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:06.261938  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:06.290103  404800 cri.go:89] found id: ""
	I1212 20:38:06.290116  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.290123  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:06.290128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:06.290185  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:06.316022  404800 cri.go:89] found id: ""
	I1212 20:38:06.316037  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.316044  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:06.316049  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:06.316107  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:06.342973  404800 cri.go:89] found id: ""
	I1212 20:38:06.342988  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.342996  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:06.343004  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:06.343015  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:06.413249  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:06.413270  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:06.428467  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:06.428492  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:06.521492  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:06.507208   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.508013   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.511867   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.515565   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.517219   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:06.507208   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.508013   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.511867   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.515565   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.517219   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:06.521503  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:06.521513  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:06.591077  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:06.591100  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:09.125976  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:09.136849  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:09.136908  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:09.163513  404800 cri.go:89] found id: ""
	I1212 20:38:09.163528  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.163535  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:09.163541  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:09.163603  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:09.194011  404800 cri.go:89] found id: ""
	I1212 20:38:09.194026  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.194033  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:09.194038  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:09.194098  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:09.223187  404800 cri.go:89] found id: ""
	I1212 20:38:09.223201  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.223214  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:09.223219  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:09.223278  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:09.253410  404800 cri.go:89] found id: ""
	I1212 20:38:09.253424  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.253431  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:09.253436  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:09.253509  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:09.278330  404800 cri.go:89] found id: ""
	I1212 20:38:09.278344  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.278351  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:09.278356  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:09.278416  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:09.307840  404800 cri.go:89] found id: ""
	I1212 20:38:09.307854  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.307861  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:09.307866  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:09.307924  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:09.335632  404800 cri.go:89] found id: ""
	I1212 20:38:09.335646  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.335653  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:09.335660  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:09.335671  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:09.406024  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:09.406045  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:09.434314  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:09.434331  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:09.515858  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:09.515880  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:09.532868  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:09.532885  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:09.599150  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:09.591061   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.591515   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593132   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593474   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.595021   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:09.591061   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.591515   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593132   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593474   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.595021   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
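
The cycle above repeats roughly every three seconds: the log collector probes for a kube-apiserver process, finds no control-plane containers, and every "describe nodes" attempt fails with "connection refused" against https://localhost:8441, meaning nothing is listening on the apiserver port at all. A minimal Go sketch of that kind of port probe is shown below; it is illustrative only (not minikube's own implementation), and the address and intervals are assumptions taken from the timestamps and error messages in this log.

// probe_apiserver.go: a minimal sketch (not minikube's code) of the poll this
// log reflects: retry a TCP dial to the apiserver port every few seconds
// until it accepts connections or an overall deadline passes.
package main

import (
	"fmt"
	"net"
	"time"
)

func waitForPort(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil // port is accepting connections
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s: last error: %v", addr, err)
		}
		time.Sleep(interval) // the log shows roughly 3s between attempts
	}
}

func main() {
	// localhost:8441 is the apiserver endpoint the failing kubectl calls target.
	if err := waitForPort("localhost:8441", 3*time.Second, 2*time.Minute); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("apiserver port is reachable")
	}
}
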
	I1212 20:38:12.099436  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:12.110285  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:12.110345  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:12.135810  404800 cri.go:89] found id: ""
	I1212 20:38:12.135825  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.135832  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:12.135837  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:12.135897  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:12.160429  404800 cri.go:89] found id: ""
	I1212 20:38:12.160444  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.160451  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:12.160456  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:12.160511  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:12.187065  404800 cri.go:89] found id: ""
	I1212 20:38:12.187080  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.187087  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:12.187092  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:12.187154  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:12.212658  404800 cri.go:89] found id: ""
	I1212 20:38:12.212673  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.212681  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:12.212686  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:12.212743  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:12.238821  404800 cri.go:89] found id: ""
	I1212 20:38:12.238836  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.238843  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:12.238848  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:12.238909  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:12.265300  404800 cri.go:89] found id: ""
	I1212 20:38:12.265315  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.265322  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:12.265332  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:12.265392  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:12.292396  404800 cri.go:89] found id: ""
	I1212 20:38:12.292410  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.292418  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:12.292435  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:12.292445  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:12.358716  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:12.358736  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:12.374039  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:12.374056  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:12.438679  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:12.429880   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.430412   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432221   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432895   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.434800   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:12.429880   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.430412   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432221   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432895   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.434800   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:12.438690  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:12.438701  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:12.519199  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:12.519218  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
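
Each cycle also asks crictl for every container whose name matches a control-plane component; the repeated `found id: ""` / "0 containers" lines mean crictl returned nothing, i.e. the pods were never created. The sketch below is an assumed illustration of that per-component check (it is not minikube's cri.go), wrapping the same `sudo crictl ps -a --quiet --name=<name>` command the log records.

// list_containers.go: illustrative sketch of the per-component container check.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainerIDs(name string) ([]string, error) {
	// Mirrors the logged command: sudo crictl ps -a --quiet --name=<name>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: error: %v\n", c, err)
			continue
		}
		// An empty slice here corresponds to the log's "0 containers" warnings.
		fmt.Printf("%s: %d containers\n", c, len(ids))
	}
}
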
	I1212 20:38:15.058664  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:15.078525  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:15.078590  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:15.105060  404800 cri.go:89] found id: ""
	I1212 20:38:15.105075  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.105082  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:15.105088  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:15.105153  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:15.133041  404800 cri.go:89] found id: ""
	I1212 20:38:15.133056  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.133063  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:15.133068  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:15.133133  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:15.160326  404800 cri.go:89] found id: ""
	I1212 20:38:15.160340  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.160347  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:15.160353  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:15.160435  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:15.187814  404800 cri.go:89] found id: ""
	I1212 20:38:15.187828  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.187835  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:15.187840  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:15.187900  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:15.227819  404800 cri.go:89] found id: ""
	I1212 20:38:15.227833  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.227839  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:15.227844  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:15.227901  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:15.255383  404800 cri.go:89] found id: ""
	I1212 20:38:15.255398  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.255404  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:15.255410  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:15.255468  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:15.280977  404800 cri.go:89] found id: ""
	I1212 20:38:15.280991  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.280997  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:15.281005  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:15.281022  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:15.347810  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:15.347832  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:15.362524  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:15.362541  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:15.427106  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:15.418336   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.419038   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.420787   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.421428   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.423218   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:15.418336   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.419038   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.420787   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.421428   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.423218   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:15.427116  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:15.427127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:15.497224  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:15.497244  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:18.029289  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:18.044111  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:18.044210  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:18.071723  404800 cri.go:89] found id: ""
	I1212 20:38:18.071737  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.071745  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:18.071750  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:18.071810  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:18.099105  404800 cri.go:89] found id: ""
	I1212 20:38:18.099119  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.099126  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:18.099131  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:18.099187  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:18.123656  404800 cri.go:89] found id: ""
	I1212 20:38:18.123670  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.123677  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:18.123682  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:18.123739  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:18.150020  404800 cri.go:89] found id: ""
	I1212 20:38:18.150033  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.150040  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:18.150045  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:18.150101  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:18.174527  404800 cri.go:89] found id: ""
	I1212 20:38:18.174541  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.174548  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:18.174552  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:18.174608  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:18.198686  404800 cri.go:89] found id: ""
	I1212 20:38:18.198701  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.198716  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:18.198722  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:18.198779  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:18.223482  404800 cri.go:89] found id: ""
	I1212 20:38:18.223496  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.223512  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:18.223521  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:18.223531  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:18.289154  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:18.289176  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:18.303954  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:18.303970  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:18.371467  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:18.362642   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.363507   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365091   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365692   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.367280   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:18.362642   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.363507   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365091   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365692   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.367280   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:18.371477  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:18.371493  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:18.440117  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:18.440138  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:20.983282  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:20.993766  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:20.993829  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:21.020992  404800 cri.go:89] found id: ""
	I1212 20:38:21.021006  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.021014  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:21.021019  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:21.021081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:21.047844  404800 cri.go:89] found id: ""
	I1212 20:38:21.047857  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.047865  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:21.047869  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:21.047930  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:21.073011  404800 cri.go:89] found id: ""
	I1212 20:38:21.073025  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.073033  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:21.073038  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:21.073095  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:21.098802  404800 cri.go:89] found id: ""
	I1212 20:38:21.098816  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.098823  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:21.098829  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:21.098884  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:21.127579  404800 cri.go:89] found id: ""
	I1212 20:38:21.127594  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.127601  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:21.127606  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:21.127672  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:21.154921  404800 cri.go:89] found id: ""
	I1212 20:38:21.154935  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.154942  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:21.154947  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:21.155001  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:21.181275  404800 cri.go:89] found id: ""
	I1212 20:38:21.181290  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.181297  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:21.181304  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:21.181316  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:21.197100  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:21.197118  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:21.263963  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:21.255290   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.255727   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.257359   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.258725   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.259518   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:21.255290   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.255727   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.257359   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.258725   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.259518   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:21.263974  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:21.263991  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:21.335974  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:21.335994  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:21.364201  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:21.364220  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
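
Between probes the collector gathers the same diagnostic sources each time: the last 400 lines of the kubelet and CRI-O journals plus recent kernel warnings from dmesg. A rough sketch of that gathering pass is below; it assumes local shell access to a systemd host and is only a stand-in for the ssh_runner-driven commands quoted in the log.

// gather_logs.go: rough sketch of the diagnostics pass interleaved with each probe.
package main

import (
	"fmt"
	"os/exec"
)

func run(cmd string) string {
	// The logged commands are wrapped in /bin/bash -c "..." exactly like this.
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Sprintf("command failed: %v\n%s", err, out)
	}
	return string(out)
}

func main() {
	sources := map[string]string{
		"kubelet": `sudo journalctl -u kubelet -n 400`,
		"CRI-O":   `sudo journalctl -u crio -n 400`,
		"dmesg":   `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	}
	for name, cmd := range sources {
		fmt.Printf("===== %s =====\n%s\n", name, run(cmd))
	}
}
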
	I1212 20:38:23.937090  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:23.947413  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:23.947474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:23.973243  404800 cri.go:89] found id: ""
	I1212 20:38:23.973258  404800 logs.go:282] 0 containers: []
	W1212 20:38:23.973265  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:23.973270  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:23.973324  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:23.999530  404800 cri.go:89] found id: ""
	I1212 20:38:23.999545  404800 logs.go:282] 0 containers: []
	W1212 20:38:23.999552  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:23.999557  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:23.999616  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:24.030165  404800 cri.go:89] found id: ""
	I1212 20:38:24.030180  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.030187  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:24.030193  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:24.030254  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:24.059776  404800 cri.go:89] found id: ""
	I1212 20:38:24.059792  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.059799  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:24.059804  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:24.059882  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:24.086292  404800 cri.go:89] found id: ""
	I1212 20:38:24.086306  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.086330  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:24.086338  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:24.086427  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:24.112150  404800 cri.go:89] found id: ""
	I1212 20:38:24.112164  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.112180  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:24.112185  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:24.112240  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:24.137517  404800 cri.go:89] found id: ""
	I1212 20:38:24.137532  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.137539  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:24.137547  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:24.137557  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:24.207037  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:24.207056  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:24.222129  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:24.222144  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:24.288581  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:24.279746   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.280696   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282388   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282920   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.284780   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:24.279746   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.280696   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282388   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282920   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.284780   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:24.288595  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:24.288605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:24.357884  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:24.357903  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:26.887217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:26.897518  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:26.897580  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:26.926965  404800 cri.go:89] found id: ""
	I1212 20:38:26.926980  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.926987  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:26.926992  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:26.927052  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:26.952974  404800 cri.go:89] found id: ""
	I1212 20:38:26.952988  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.952995  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:26.953000  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:26.953060  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:26.978786  404800 cri.go:89] found id: ""
	I1212 20:38:26.978801  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.978808  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:26.978813  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:26.978870  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:27.008564  404800 cri.go:89] found id: ""
	I1212 20:38:27.008580  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.008590  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:27.008595  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:27.008659  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:27.036286  404800 cri.go:89] found id: ""
	I1212 20:38:27.036301  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.036308  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:27.036313  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:27.036391  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:27.061515  404800 cri.go:89] found id: ""
	I1212 20:38:27.061529  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.061536  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:27.061541  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:27.061604  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:27.090603  404800 cri.go:89] found id: ""
	I1212 20:38:27.090617  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.090624  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:27.090632  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:27.090642  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:27.159097  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:27.150336   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.151193   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.152795   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.153435   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.155082   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:27.150336   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.151193   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.152795   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.153435   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.155082   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:27.159107  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:27.159118  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:27.228300  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:27.228321  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:27.258850  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:27.258867  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:27.328117  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:27.328139  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:29.843406  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:29.853466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:29.853526  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:29.878238  404800 cri.go:89] found id: ""
	I1212 20:38:29.878253  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.878260  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:29.878265  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:29.878323  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:29.907469  404800 cri.go:89] found id: ""
	I1212 20:38:29.907483  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.907490  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:29.907495  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:29.907550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:29.932873  404800 cri.go:89] found id: ""
	I1212 20:38:29.932887  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.932894  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:29.932900  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:29.932962  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:29.958139  404800 cri.go:89] found id: ""
	I1212 20:38:29.958153  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.958160  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:29.958165  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:29.958222  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:29.984390  404800 cri.go:89] found id: ""
	I1212 20:38:29.984405  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.984412  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:29.984416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:29.984474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:30.027335  404800 cri.go:89] found id: ""
	I1212 20:38:30.027351  404800 logs.go:282] 0 containers: []
	W1212 20:38:30.027360  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:30.027365  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:30.027440  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:30.094850  404800 cri.go:89] found id: ""
	I1212 20:38:30.094867  404800 logs.go:282] 0 containers: []
	W1212 20:38:30.094883  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:30.094911  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:30.094939  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:30.129199  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:30.129217  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:30.196813  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:30.196832  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:30.212809  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:30.212829  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:30.281108  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:30.272853   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.273567   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275146   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275609   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.277153   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:30.272853   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.273567   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275146   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275609   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.277153   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:30.281119  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:30.281130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:32.853025  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:32.863369  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:32.863434  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:32.890487  404800 cri.go:89] found id: ""
	I1212 20:38:32.890501  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.890508  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:32.890513  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:32.890570  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:32.915071  404800 cri.go:89] found id: ""
	I1212 20:38:32.915085  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.915093  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:32.915098  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:32.915155  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:32.940096  404800 cri.go:89] found id: ""
	I1212 20:38:32.940117  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.940131  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:32.940142  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:32.940234  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:32.965615  404800 cri.go:89] found id: ""
	I1212 20:38:32.965629  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.965644  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:32.965649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:32.965705  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:32.990438  404800 cri.go:89] found id: ""
	I1212 20:38:32.990452  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.990459  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:32.990466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:32.990527  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:33.018112  404800 cri.go:89] found id: ""
	I1212 20:38:33.018134  404800 logs.go:282] 0 containers: []
	W1212 20:38:33.018141  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:33.018146  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:33.018213  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:33.045014  404800 cri.go:89] found id: ""
	I1212 20:38:33.045029  404800 logs.go:282] 0 containers: []
	W1212 20:38:33.045036  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:33.045043  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:33.045054  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:33.116627  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:33.116649  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:33.131589  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:33.131605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:33.200143  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:33.191174   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.192118   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.193903   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.194394   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.196060   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:33.191174   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.192118   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.193903   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.194394   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.196060   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:33.200152  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:33.200165  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:33.270338  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:33.270359  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
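	(The lines above are one pass of minikube's apiserver wait loop: it pgreps for a kube-apiserver process, asks crictl for each expected control-plane container, and, finding none, falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch of rerunning the same checks by hand over minikube ssh follows; the profile name is a hypothetical placeholder, not taken from this log.)

	    # Hypothetical profile name; substitute the profile under test.
	    PROFILE=functional-000000

	    # Is a kube-apiserver process running at all?
	    minikube -p "$PROFILE" ssh "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"

	    # Are any control-plane containers known to CRI-O?
	    minikube -p "$PROFILE" ssh "sudo crictl ps -a --quiet --name=kube-apiserver"
	    minikube -p "$PROFILE" ssh "sudo crictl ps -a --quiet --name=etcd"

	    # The same fallback logs the wait loop gathers.
	    minikube -p "$PROFILE" ssh "sudo journalctl -u kubelet -n 400"
	    minikube -p "$PROFILE" ssh "sudo journalctl -u crio -n 400"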
	I1212 20:38:35.806115  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:35.816131  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:35.816187  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:35.841646  404800 cri.go:89] found id: ""
	I1212 20:38:35.841660  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.841667  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:35.841672  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:35.841728  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:35.871233  404800 cri.go:89] found id: ""
	I1212 20:38:35.871247  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.871254  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:35.871259  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:35.871316  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:35.896270  404800 cri.go:89] found id: ""
	I1212 20:38:35.896285  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.896292  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:35.896297  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:35.896354  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:35.923679  404800 cri.go:89] found id: ""
	I1212 20:38:35.923693  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.923700  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:35.923705  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:35.923796  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:35.950841  404800 cri.go:89] found id: ""
	I1212 20:38:35.950856  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.950862  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:35.950867  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:35.950924  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:35.981198  404800 cri.go:89] found id: ""
	I1212 20:38:35.981212  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.981219  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:35.981224  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:35.981282  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:36.016848  404800 cri.go:89] found id: ""
	I1212 20:38:36.016865  404800 logs.go:282] 0 containers: []
	W1212 20:38:36.016872  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:36.016881  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:36.016892  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:36.085541  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:36.085562  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:36.100886  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:36.100904  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:36.169874  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:36.161259   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.162033   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.163626   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.164180   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.165318   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:36.161259   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.162033   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.163626   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.164180   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.165318   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:36.169886  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:36.169897  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:36.239866  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:36.239886  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:38.770757  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:38.781375  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:38.781433  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:38.809421  404800 cri.go:89] found id: ""
	I1212 20:38:38.809436  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.809443  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:38.809448  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:38.809506  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:38.839566  404800 cri.go:89] found id: ""
	I1212 20:38:38.839579  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.839586  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:38.839591  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:38.839652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:38.865187  404800 cri.go:89] found id: ""
	I1212 20:38:38.865201  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.865208  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:38.865213  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:38.865272  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:38.890808  404800 cri.go:89] found id: ""
	I1212 20:38:38.890822  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.890829  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:38.890835  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:38.890891  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:38.917091  404800 cri.go:89] found id: ""
	I1212 20:38:38.917104  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.917117  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:38.917122  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:38.917179  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:38.942942  404800 cri.go:89] found id: ""
	I1212 20:38:38.942957  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.942964  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:38.942970  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:38.943030  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:38.973257  404800 cri.go:89] found id: ""
	I1212 20:38:38.973271  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.973278  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:38.973286  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:38.973296  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:39.043336  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:39.043356  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:39.072568  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:39.072588  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:39.140916  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:39.140937  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:39.157933  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:39.157949  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:39.223417  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:39.215410   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.216412   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.217404   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.218045   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.219600   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:39.215410   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.216412   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.217404   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.218045   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.219600   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
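	(Every "describe nodes" attempt above fails the same way: kubectl inside the node cannot reach the apiserver on localhost:8441, the port this profile's kubeconfig points at, so the error only restates that nothing is listening there. A quick probe of that port, as a sketch using the same hypothetical $PROFILE as above and assuming curl and ss are available in the node image, which this log does not confirm:)

	    # From inside the node: is anything listening on the apiserver port?
	    minikube -p "$PROFILE" ssh "sudo ss -ltnp | grep 8441 || echo 'nothing listening on 8441'"

	    # The same request kubectl is failing on, made directly (expect connection refused here too).
	    minikube -p "$PROFILE" ssh "curl -sk https://localhost:8441/healthz || true"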
	I1212 20:38:41.723637  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:41.734660  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:41.734716  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:41.767247  404800 cri.go:89] found id: ""
	I1212 20:38:41.767262  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.767269  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:41.767275  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:41.767328  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:41.796221  404800 cri.go:89] found id: ""
	I1212 20:38:41.796235  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.796248  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:41.796253  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:41.796312  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:41.821187  404800 cri.go:89] found id: ""
	I1212 20:38:41.821203  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.821216  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:41.821221  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:41.821284  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:41.847287  404800 cri.go:89] found id: ""
	I1212 20:38:41.847301  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.847308  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:41.847313  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:41.847372  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:41.872067  404800 cri.go:89] found id: ""
	I1212 20:38:41.872082  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.872089  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:41.872093  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:41.872152  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:41.897796  404800 cri.go:89] found id: ""
	I1212 20:38:41.897811  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.897818  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:41.897823  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:41.897881  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:41.923795  404800 cri.go:89] found id: ""
	I1212 20:38:41.923811  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.923818  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:41.923825  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:41.923836  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:41.990470  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:41.990491  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:42.009111  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:42.009130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:42.088409  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:42.077817   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.078495   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.081716   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.082488   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.083610   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:42.077817   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.078495   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.081716   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.082488   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.083610   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:42.088421  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:42.088433  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:42.192507  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:42.192534  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:44.727139  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:44.739542  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:44.739600  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:44.773501  404800 cri.go:89] found id: ""
	I1212 20:38:44.773515  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.773522  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:44.773527  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:44.773589  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:44.800128  404800 cri.go:89] found id: ""
	I1212 20:38:44.800142  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.800149  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:44.800154  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:44.800211  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:44.825549  404800 cri.go:89] found id: ""
	I1212 20:38:44.825563  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.825571  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:44.825576  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:44.825641  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:44.851616  404800 cri.go:89] found id: ""
	I1212 20:38:44.851630  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.851637  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:44.851642  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:44.851701  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:44.877278  404800 cri.go:89] found id: ""
	I1212 20:38:44.877293  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.877300  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:44.877305  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:44.877365  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:44.905623  404800 cri.go:89] found id: ""
	I1212 20:38:44.905637  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.905644  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:44.905649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:44.905705  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:44.931299  404800 cri.go:89] found id: ""
	I1212 20:38:44.931313  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.931319  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:44.931327  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:44.931338  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:44.998840  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:44.998865  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:45.080550  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:45.080572  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:45.173764  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:45.161784   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.162860   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.164308   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166462   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166938   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:45.161784   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.162860   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.164308   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166462   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166938   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:45.173775  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:45.173787  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:45.264449  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:45.264506  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:47.816513  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:47.826919  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:47.826978  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:47.856068  404800 cri.go:89] found id: ""
	I1212 20:38:47.856083  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.856090  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:47.856095  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:47.856154  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:47.883508  404800 cri.go:89] found id: ""
	I1212 20:38:47.883522  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.883529  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:47.883534  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:47.883595  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:47.909513  404800 cri.go:89] found id: ""
	I1212 20:38:47.909527  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.909534  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:47.909539  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:47.909617  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:47.939000  404800 cri.go:89] found id: ""
	I1212 20:38:47.939015  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.939022  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:47.939027  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:47.939084  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:47.965875  404800 cri.go:89] found id: ""
	I1212 20:38:47.965889  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.965897  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:47.965902  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:47.965975  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:47.992041  404800 cri.go:89] found id: ""
	I1212 20:38:47.992056  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.992063  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:47.992068  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:47.992127  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:48.022837  404800 cri.go:89] found id: ""
	I1212 20:38:48.022852  404800 logs.go:282] 0 containers: []
	W1212 20:38:48.022860  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:48.022867  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:48.022880  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:48.039393  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:48.039410  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:48.107317  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:48.098264   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.099224   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.100841   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.101682   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.102665   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:48.098264   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.099224   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.100841   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.101682   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.102665   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:48.107328  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:48.107340  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:48.175841  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:48.175861  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:48.210572  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:48.210594  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
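	(Since crictl reports zero containers for every component on every pass, the obvious follow-ups are whether kubelet itself is running and whether the static pod manifests it would launch exist. A sketch, again with the hypothetical $PROFILE placeholder, assuming minikube's kubeadm-style layout under /etc/kubernetes/manifests; that path is standard for kubeadm bootstrapping but is not shown in this log:)

	    # Is kubelet itself up?
	    minikube -p "$PROFILE" ssh "systemctl is-active kubelet"

	    # Do the control-plane static pod manifests exist for kubelet to start?
	    minikube -p "$PROFILE" ssh "ls -l /etc/kubernetes/manifests"

	    # Recent kubelet errors, from the same journal the wait loop tails above.
	    minikube -p "$PROFILE" ssh "sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40"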
	I1212 20:38:50.783090  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:50.796736  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:50.796840  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:50.825233  404800 cri.go:89] found id: ""
	I1212 20:38:50.825248  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.825255  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:50.825261  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:50.825319  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:50.852180  404800 cri.go:89] found id: ""
	I1212 20:38:50.852194  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.852201  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:50.852206  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:50.852262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:50.878747  404800 cri.go:89] found id: ""
	I1212 20:38:50.878763  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.878770  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:50.878775  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:50.878835  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:50.904522  404800 cri.go:89] found id: ""
	I1212 20:38:50.904536  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.904543  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:50.904548  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:50.904604  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:50.931344  404800 cri.go:89] found id: ""
	I1212 20:38:50.931360  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.931367  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:50.931372  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:50.931428  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:50.957483  404800 cri.go:89] found id: ""
	I1212 20:38:50.957498  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.957505  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:50.957510  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:50.957568  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:50.982756  404800 cri.go:89] found id: ""
	I1212 20:38:50.982771  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.982778  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:50.982785  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:50.982796  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:51.050968  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:51.050990  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:51.066537  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:51.066556  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:51.139075  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:51.129544   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.130952   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.132306   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.133118   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.134432   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:51.129544   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.130952   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.132306   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.133118   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.134432   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:51.139089  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:51.139101  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:51.210713  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:51.210734  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:53.744531  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:53.755115  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:53.755176  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:53.782428  404800 cri.go:89] found id: ""
	I1212 20:38:53.782443  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.782450  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:53.782455  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:53.782513  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:53.809102  404800 cri.go:89] found id: ""
	I1212 20:38:53.809116  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.809123  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:53.809128  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:53.809188  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:53.836479  404800 cri.go:89] found id: ""
	I1212 20:38:53.836492  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.836500  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:53.836505  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:53.836567  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:53.862110  404800 cri.go:89] found id: ""
	I1212 20:38:53.862124  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.862131  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:53.862136  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:53.862193  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:53.888092  404800 cri.go:89] found id: ""
	I1212 20:38:53.888112  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.888119  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:53.888124  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:53.888188  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:53.918381  404800 cri.go:89] found id: ""
	I1212 20:38:53.918412  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.918419  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:53.918425  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:53.918482  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:53.944685  404800 cri.go:89] found id: ""
	I1212 20:38:53.944700  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.944707  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:53.944715  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:53.944726  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:53.976361  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:53.976398  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:54.043617  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:54.043638  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:54.059716  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:54.059735  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:54.127525  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:54.119445   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.119949   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121471   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121928   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.123395   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:54.119445   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.119949   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121471   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121928   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.123395   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:54.127535  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:54.127550  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:56.697671  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:56.712906  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:56.712987  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:56.745699  404800 cri.go:89] found id: ""
	I1212 20:38:56.745713  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.745721  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:56.745726  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:56.745780  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:56.774995  404800 cri.go:89] found id: ""
	I1212 20:38:56.775008  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.775015  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:56.775022  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:56.775076  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:56.801088  404800 cri.go:89] found id: ""
	I1212 20:38:56.801102  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.801109  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:56.801115  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:56.801171  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:56.825939  404800 cri.go:89] found id: ""
	I1212 20:38:56.825953  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.825960  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:56.825965  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:56.826020  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:56.851013  404800 cri.go:89] found id: ""
	I1212 20:38:56.851028  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.851035  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:56.851040  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:56.851099  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:56.875791  404800 cri.go:89] found id: ""
	I1212 20:38:56.875815  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.875823  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:56.875829  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:56.875894  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:56.902106  404800 cri.go:89] found id: ""
	I1212 20:38:56.902121  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.902128  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:56.902136  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:56.902146  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:56.933095  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:56.933112  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:56.999748  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:56.999770  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:57.023866  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:57.023882  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:57.095113  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:57.086986   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.087518   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089030   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089355   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.090800   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:57.086986   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.087518   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089030   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089355   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.090800   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:57.095123  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:57.095133  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:59.665770  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:59.675717  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:59.675792  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:59.701606  404800 cri.go:89] found id: ""
	I1212 20:38:59.701620  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.701626  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:59.701631  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:59.701688  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:59.736582  404800 cri.go:89] found id: ""
	I1212 20:38:59.736597  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.736603  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:59.736609  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:59.736666  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:59.764566  404800 cri.go:89] found id: ""
	I1212 20:38:59.764588  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.764595  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:59.764602  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:59.764664  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:59.793759  404800 cri.go:89] found id: ""
	I1212 20:38:59.793774  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.793781  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:59.793786  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:59.793858  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:59.821810  404800 cri.go:89] found id: ""
	I1212 20:38:59.821824  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.821841  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:59.821846  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:59.821903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:59.851583  404800 cri.go:89] found id: ""
	I1212 20:38:59.851606  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.851614  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:59.851619  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:59.851688  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:59.878726  404800 cri.go:89] found id: ""
	I1212 20:38:59.878740  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.878746  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:59.878754  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:59.878764  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:59.943708  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:59.943728  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:59.958686  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:59.958704  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:00.056135  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:00.034453   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.036639   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.037425   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.039837   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.045102   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:00.034453   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.036639   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.037425   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.039837   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.045102   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:00.056146  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:00.056159  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:00.155066  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:00.155091  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:02.718200  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:02.729492  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:02.729550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:02.760544  404800 cri.go:89] found id: ""
	I1212 20:39:02.760559  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.760566  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:02.760571  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:02.760635  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:02.792146  404800 cri.go:89] found id: ""
	I1212 20:39:02.792161  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.792174  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:02.792180  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:02.792239  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:02.818586  404800 cri.go:89] found id: ""
	I1212 20:39:02.818601  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.818609  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:02.818614  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:02.818678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:02.844172  404800 cri.go:89] found id: ""
	I1212 20:39:02.844187  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.844194  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:02.844199  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:02.844256  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:02.871047  404800 cri.go:89] found id: ""
	I1212 20:39:02.871061  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.871069  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:02.871074  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:02.871132  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:02.898048  404800 cri.go:89] found id: ""
	I1212 20:39:02.898062  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.898070  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:02.898075  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:02.898131  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:02.923194  404800 cri.go:89] found id: ""
	I1212 20:39:02.923209  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.923216  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:02.923224  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:02.923234  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:02.988912  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:02.988932  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:03.004362  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:03.004410  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:03.075259  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:03.067064   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.067768   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069384   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069725   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.071272   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:03.067064   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.067768   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069384   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069725   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.071272   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:03.075269  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:03.075280  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:03.148856  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:03.148876  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:05.677035  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:05.686903  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:05.686961  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:05.722182  404800 cri.go:89] found id: ""
	I1212 20:39:05.722197  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.722204  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:05.722211  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:05.722309  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:05.756818  404800 cri.go:89] found id: ""
	I1212 20:39:05.756832  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.756839  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:05.756844  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:05.756946  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:05.785780  404800 cri.go:89] found id: ""
	I1212 20:39:05.785794  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.785801  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:05.785806  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:05.785862  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:05.816052  404800 cri.go:89] found id: ""
	I1212 20:39:05.816066  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.816073  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:05.816078  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:05.816134  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:05.841695  404800 cri.go:89] found id: ""
	I1212 20:39:05.841709  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.841716  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:05.841721  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:05.841782  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:05.868902  404800 cri.go:89] found id: ""
	I1212 20:39:05.868917  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.868924  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:05.868929  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:05.868998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:05.898574  404800 cri.go:89] found id: ""
	I1212 20:39:05.898589  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.898596  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:05.898603  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:05.898617  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:05.966027  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:05.966048  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:05.980827  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:05.980843  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:06.048518  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:06.039273   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.039766   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041577   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041956   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.043588   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:06.039273   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.039766   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041577   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041956   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.043588   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:06.048528  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:06.048539  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:06.118539  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:06.118566  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:08.648618  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:08.659086  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:08.659147  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:08.684568  404800 cri.go:89] found id: ""
	I1212 20:39:08.684583  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.684590  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:08.684595  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:08.684655  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:08.714848  404800 cri.go:89] found id: ""
	I1212 20:39:08.714862  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.714869  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:08.714873  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:08.714942  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:08.749610  404800 cri.go:89] found id: ""
	I1212 20:39:08.749636  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.749643  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:08.749654  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:08.749720  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:08.780856  404800 cri.go:89] found id: ""
	I1212 20:39:08.780871  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.780878  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:08.780883  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:08.780943  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:08.805202  404800 cri.go:89] found id: ""
	I1212 20:39:08.805216  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.805223  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:08.805228  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:08.805287  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:08.830301  404800 cri.go:89] found id: ""
	I1212 20:39:08.830317  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.830324  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:08.830329  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:08.830389  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:08.857083  404800 cri.go:89] found id: ""
	I1212 20:39:08.857098  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.857105  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:08.857113  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:08.857124  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:08.925442  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:08.925464  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:08.940523  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:08.940539  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:09.013233  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:08.997498   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.998019   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.999823   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.000173   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.008193   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:08.997498   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.998019   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.999823   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.000173   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.008193   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:09.013243  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:09.013254  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:09.085178  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:09.085198  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:11.613987  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:11.624006  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:11.624073  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:11.648868  404800 cri.go:89] found id: ""
	I1212 20:39:11.648883  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.648890  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:11.648902  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:11.648959  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:11.673750  404800 cri.go:89] found id: ""
	I1212 20:39:11.673764  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.673771  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:11.673776  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:11.673837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:11.701310  404800 cri.go:89] found id: ""
	I1212 20:39:11.701324  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.701340  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:11.701347  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:11.701407  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:11.728807  404800 cri.go:89] found id: ""
	I1212 20:39:11.728821  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.728828  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:11.728833  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:11.728898  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:11.762671  404800 cri.go:89] found id: ""
	I1212 20:39:11.762706  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.762715  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:11.762720  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:11.762786  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:11.788450  404800 cri.go:89] found id: ""
	I1212 20:39:11.788481  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.788488  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:11.788493  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:11.788559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:11.816693  404800 cri.go:89] found id: ""
	I1212 20:39:11.816707  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.816714  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:11.816722  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:11.816732  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:11.886583  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:11.878248   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.878964   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.880707   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.881208   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.882676   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:11.878248   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.878964   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.880707   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.881208   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.882676   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:11.886593  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:11.886604  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:11.955026  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:11.955046  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:11.984471  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:11.984489  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:12.054196  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:12.054217  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:14.569266  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:14.579178  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:14.579234  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:14.603297  404800 cri.go:89] found id: ""
	I1212 20:39:14.603312  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.603319  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:14.603324  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:14.603381  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:14.628304  404800 cri.go:89] found id: ""
	I1212 20:39:14.628318  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.628325  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:14.628330  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:14.628404  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:14.653112  404800 cri.go:89] found id: ""
	I1212 20:39:14.653126  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.653133  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:14.653138  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:14.653201  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:14.678048  404800 cri.go:89] found id: ""
	I1212 20:39:14.678063  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.678078  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:14.678083  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:14.678141  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:14.710561  404800 cri.go:89] found id: ""
	I1212 20:39:14.710584  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.710592  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:14.710597  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:14.710662  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:14.744837  404800 cri.go:89] found id: ""
	I1212 20:39:14.744862  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.744870  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:14.744876  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:14.744943  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:14.777906  404800 cri.go:89] found id: ""
	I1212 20:39:14.777920  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.777927  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:14.777936  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:14.777946  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:14.844303  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:14.844323  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:14.859158  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:14.859179  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:14.922392  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:14.913424   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.913976   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.915631   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.916316   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.918007   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:14.913424   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.913976   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.915631   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.916316   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.918007   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:14.922427  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:14.922438  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:14.992900  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:14.992920  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:17.545196  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:17.555712  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:17.555785  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:17.582444  404800 cri.go:89] found id: ""
	I1212 20:39:17.582458  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.582465  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:17.582470  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:17.582527  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:17.606892  404800 cri.go:89] found id: ""
	I1212 20:39:17.606906  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.606926  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:17.606932  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:17.606998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:17.631824  404800 cri.go:89] found id: ""
	I1212 20:39:17.631840  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.631846  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:17.631851  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:17.631906  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:17.658525  404800 cri.go:89] found id: ""
	I1212 20:39:17.658540  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.658548  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:17.658553  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:17.658610  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:17.687764  404800 cri.go:89] found id: ""
	I1212 20:39:17.687777  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.687784  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:17.687789  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:17.687844  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:17.720465  404800 cri.go:89] found id: ""
	I1212 20:39:17.720480  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.720488  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:17.720493  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:17.720561  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:17.758231  404800 cri.go:89] found id: ""
	I1212 20:39:17.758245  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.758261  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:17.758270  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:17.758281  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:17.838248  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:17.838280  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:17.852734  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:17.852752  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:17.918178  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:17.909812   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.910592   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912169   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912772   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.914355   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:17.909812   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.910592   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912169   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912772   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.914355   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:17.918190  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:17.918202  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:17.985880  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:17.985901  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:20.529812  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:20.539894  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:20.539954  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:20.564821  404800 cri.go:89] found id: ""
	I1212 20:39:20.564834  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.564841  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:20.564846  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:20.564903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:20.594524  404800 cri.go:89] found id: ""
	I1212 20:39:20.594538  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.594544  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:20.594549  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:20.594606  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:20.619997  404800 cri.go:89] found id: ""
	I1212 20:39:20.620011  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.620018  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:20.620023  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:20.620079  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:20.644542  404800 cri.go:89] found id: ""
	I1212 20:39:20.644557  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.644564  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:20.644569  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:20.644624  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:20.670273  404800 cri.go:89] found id: ""
	I1212 20:39:20.670289  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.670296  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:20.670302  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:20.670358  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:20.694691  404800 cri.go:89] found id: ""
	I1212 20:39:20.694705  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.694712  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:20.694717  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:20.694771  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:20.739770  404800 cri.go:89] found id: ""
	I1212 20:39:20.739784  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.739791  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:20.739798  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:20.739809  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:20.810407  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:20.810429  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:20.825194  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:20.825210  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:20.899009  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:20.889886   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.890662   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.892566   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.893441   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.894986   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:20.889886   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.890662   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.892566   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.893441   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.894986   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:20.899020  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:20.899032  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:20.977107  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:20.977129  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:23.510601  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:23.521033  404800 kubeadm.go:602] duration metric: took 4m3.32729864s to restartPrimaryControlPlane
	W1212 20:39:23.521093  404800 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 20:39:23.521166  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 20:39:23.936973  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:39:23.949604  404800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:39:23.957638  404800 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:39:23.957691  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:39:23.965470  404800 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:39:23.965481  404800 kubeadm.go:158] found existing configuration files:
	
	I1212 20:39:23.965536  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:39:23.973241  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:39:23.973300  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:39:23.980875  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:39:23.989722  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:39:23.989777  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:39:23.997778  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:39:24.007027  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:39:24.007112  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:39:24.016721  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:39:24.025622  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:39:24.025690  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:39:24.034033  404800 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:39:24.077877  404800 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:39:24.079077  404800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:39:24.152874  404800 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:39:24.152937  404800 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:39:24.152972  404800 kubeadm.go:319] OS: Linux
	I1212 20:39:24.153034  404800 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:39:24.153081  404800 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:39:24.153126  404800 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:39:24.153178  404800 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:39:24.153225  404800 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:39:24.153271  404800 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:39:24.153314  404800 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:39:24.153363  404800 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:39:24.153407  404800 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:39:24.219483  404800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:39:24.219589  404800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:39:24.219678  404800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:39:24.228954  404800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:39:24.234481  404800 out.go:252]   - Generating certificates and keys ...
	I1212 20:39:24.234574  404800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:39:24.234638  404800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:39:24.234713  404800 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:39:24.234772  404800 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:39:24.234841  404800 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:39:24.234896  404800 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:39:24.234958  404800 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:39:24.235017  404800 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:39:24.235090  404800 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:39:24.235172  404800 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:39:24.235208  404800 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:39:24.235263  404800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:39:24.294876  404800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:39:24.534877  404800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:39:24.632916  404800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:39:24.763704  404800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:39:25.183116  404800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:39:25.183864  404800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:39:25.186637  404800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:39:25.190125  404800 out.go:252]   - Booting up control plane ...
	I1212 20:39:25.190229  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:39:25.190325  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:39:25.190412  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:39:25.205322  404800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:39:25.205427  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:39:25.215814  404800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:39:25.216163  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:39:25.216236  404800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:39:25.353073  404800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:39:25.353188  404800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:43:25.353162  404800 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000280513s
	I1212 20:43:25.353205  404800 kubeadm.go:319] 
	I1212 20:43:25.353282  404800 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:43:25.353332  404800 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:43:25.353453  404800 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:43:25.353461  404800 kubeadm.go:319] 
	I1212 20:43:25.353609  404800 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:43:25.353657  404800 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:43:25.353688  404800 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:43:25.353691  404800 kubeadm.go:319] 
	I1212 20:43:25.359119  404800 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:43:25.359579  404800 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:43:25.359715  404800 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:43:25.360004  404800 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:43:25.360010  404800 kubeadm.go:319] 
	I1212 20:43:25.360149  404800 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 20:43:25.360245  404800 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000280513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 20:43:25.360353  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 20:43:25.770646  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:43:25.783563  404800 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:43:25.783624  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:43:25.791806  404800 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:43:25.791814  404800 kubeadm.go:158] found existing configuration files:
	
	I1212 20:43:25.791862  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:43:25.799745  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:43:25.799799  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:43:25.807302  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:43:25.815035  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:43:25.815084  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:43:25.822960  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:43:25.831068  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:43:25.831122  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:43:25.838463  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:43:25.846379  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:43:25.846433  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:43:25.853821  404800 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:43:25.894714  404800 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:43:25.895009  404800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:43:25.961164  404800 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:43:25.961230  404800 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:43:25.961265  404800 kubeadm.go:319] OS: Linux
	I1212 20:43:25.961309  404800 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:43:25.961355  404800 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:43:25.961404  404800 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:43:25.961451  404800 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:43:25.961498  404800 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:43:25.961544  404800 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:43:25.961587  404800 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:43:25.961634  404800 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:43:25.961678  404800 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:43:26.029509  404800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:43:26.029612  404800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:43:26.029701  404800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:43:26.038278  404800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:43:26.041933  404800 out.go:252]   - Generating certificates and keys ...
	I1212 20:43:26.042043  404800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:43:26.042118  404800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:43:26.042200  404800 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:43:26.042265  404800 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:43:26.042338  404800 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:43:26.042395  404800 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:43:26.042462  404800 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:43:26.042527  404800 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:43:26.042606  404800 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:43:26.042683  404800 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:43:26.042722  404800 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:43:26.042781  404800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:43:26.129341  404800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:43:26.328670  404800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:43:26.553215  404800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:43:26.647700  404800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:43:26.895572  404800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:43:26.896106  404800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:43:26.898924  404800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:43:26.902076  404800 out.go:252]   - Booting up control plane ...
	I1212 20:43:26.902180  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:43:26.902266  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:43:26.902331  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:43:26.916276  404800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:43:26.916395  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:43:26.923968  404800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:43:26.925348  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:43:26.925393  404800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:43:27.058187  404800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:43:27.058300  404800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:47:27.059387  404800 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001189054s
	I1212 20:47:27.059415  404800 kubeadm.go:319] 
	I1212 20:47:27.059512  404800 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:47:27.059567  404800 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:47:27.059889  404800 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:47:27.059895  404800 kubeadm.go:319] 
	I1212 20:47:27.060100  404800 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:47:27.060426  404800 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:47:27.060479  404800 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:47:27.060483  404800 kubeadm.go:319] 
	I1212 20:47:27.064619  404800 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:47:27.065062  404800 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:47:27.065168  404800 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:47:27.065401  404800 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:47:27.065405  404800 kubeadm.go:319] 
	I1212 20:47:27.065522  404800 kubeadm.go:403] duration metric: took 12m6.90957682s to StartCluster
	I1212 20:47:27.065550  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:47:27.065606  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:47:27.065669  404800 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:47:27.091473  404800 cri.go:89] found id: ""
	I1212 20:47:27.091488  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.091495  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:47:27.091500  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:47:27.091559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:47:27.118055  404800 cri.go:89] found id: ""
	I1212 20:47:27.118069  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.118076  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:47:27.118081  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:47:27.118141  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:47:27.144553  404800 cri.go:89] found id: ""
	I1212 20:47:27.144567  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.144574  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:47:27.144579  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:47:27.144636  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:47:27.170138  404800 cri.go:89] found id: ""
	I1212 20:47:27.170152  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.170172  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:47:27.170177  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:47:27.170242  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:47:27.199222  404800 cri.go:89] found id: ""
	I1212 20:47:27.199236  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.199243  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:47:27.199248  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:47:27.199305  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:47:27.225906  404800 cri.go:89] found id: ""
	I1212 20:47:27.225921  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.225929  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:47:27.225934  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:47:27.225993  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:47:27.251774  404800 cri.go:89] found id: ""
	I1212 20:47:27.251788  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.251795  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:47:27.251803  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:47:27.251843  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:47:27.318965  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:47:27.318984  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:47:27.336153  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:47:27.336169  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:47:27.403235  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:47:27.394974   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.395673   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397398   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397865   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.399347   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:47:27.394974   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.395673   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397398   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397865   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.399347   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:47:27.403245  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:47:27.403256  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:47:27.475348  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:47:27.475369  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 20:47:27.504551  404800 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:47:27.504592  404800 out.go:285] * 
	W1212 20:47:27.504699  404800 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:47:27.504759  404800 out.go:285] * 
	W1212 20:47:27.507341  404800 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:47:27.514164  404800 out.go:203] 
	W1212 20:47:27.517009  404800 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:47:27.517056  404800 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:47:27.517078  404800 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:47:27.520151  404800 out.go:203] 
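	(The run above ends with minikube's own suggestion about the kubelet cgroup driver. A minimal sketch of acting on it, assuming the profile name functional-261311 seen in the node logs below and otherwise leaving the start flags as the test harness passed them; nothing here was verified against this job:)
	
	  # retry the start with the cgroup driver the suggestion names
	  minikube start -p functional-261311 --extra-config=kubelet.cgroup-driver=systemd
	  # then inspect the kubelet on the node, per the kubeadm troubleshooting hints
	  minikube ssh -p functional-261311 "sudo systemctl status kubelet"
	  minikube ssh -p functional-261311 "sudo journalctl -xeu kubelet"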
	
	
	==> CRI-O <==
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617557022Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617594914Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617644933Z" level=info msg="Create NRI interface"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617744979Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617956402Z" level=info msg="runtime interface created"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617981551Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617990003Z" level=info msg="runtime interface starting up..."
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618002294Z" level=info msg="starting plugins..."
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618017146Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618092166Z" level=info msg="No systemd watchdog enabled"
	Dec 12 20:35:18 functional-261311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.223066755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=efc21d87-a1b0-4de5-a48b-a3e014a5db32 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.223827337Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e9bb6f76-9bf0-445e-a911-5989a7f224b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.224384709Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=eb32b7e0-d164-45f4-be96-6799b271663a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.224808771Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=192a05d5-754c-4620-9a7e-630a23b2f5d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.225240365Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d03d55da-4587-4eea-8a9a-e52381826a03 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.225676677Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c7d002dd-9552-4715-b7be-2078da811840 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.226165084Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=daf96e40-8252-45d3-a005-ea53669f5cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.033616408Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2a067e1-2c90-429c-b592-c0026a728c8d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.0344028Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=faa7c5c2-57de-45d0-98b9-b1fc40b3897e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.034956632Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=86aed69e-89fa-4789-b7e0-66c21b53b655 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.035606867Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e8b67148-2c8a-4d5b-8bc5-9c052262c589 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036159986Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=56696416-3d0f-4c77-8dbb-77790563b13a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036707976Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ba267d45-19b4-448f-9f92-2993fe38692a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.037209312Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=301781f2-4844-424c-a8ec-9528bb0007ad name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:47:28.738121   21209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:28.738951   21209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:28.740729   21209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:28.742132   21209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:28.743730   21209 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:47:28 up  3:30,  0 user,  load average: 0.10, 0.16, 0.52
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:47:26 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:47:26 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 961.
	Dec 12 20:47:26 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:26 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:27 functional-261311 kubelet[21018]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:27 functional-261311 kubelet[21018]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:27 functional-261311 kubelet[21018]: E1212 20:47:27.004016   21018 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:47:27 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:47:27 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:47:27 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 962.
	Dec 12 20:47:27 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:27 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:27 functional-261311 kubelet[21109]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:27 functional-261311 kubelet[21109]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:27 functional-261311 kubelet[21109]: E1212 20:47:27.764650   21109 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:47:27 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:47:27 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:47:28 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 963.
	Dec 12 20:47:28 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:28 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:28 functional-261311 kubelet[21149]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:28 functional-261311 kubelet[21149]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:28 functional-261311 kubelet[21149]: E1212 20:47:28.519032   21149 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:47:28 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:47:28 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (334.026886ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (734.29s)
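The kubelet journal above shows the root cause of this failure: each restart (counters 961 through 963) exits during config validation with "kubelet is configured to not run on a host using cgroup v1", so the control plane on port 8441 never comes up and every later kubectl call is refused. A quick, illustrative way to confirm which cgroup hierarchy a host or kicbase container is actually on is to statfs /sys/fs/cgroup; the sketch below is not part of the test suite, it only demonstrates the check.

	// cgroupcheck.go: minimal sketch (not part of minikube or its tests).
	// Reports whether /sys/fs/cgroup is mounted as cgroup v2 (unified) or v1.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var fs unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
			fmt.Println("statfs failed:", err)
			return
		}
		if fs.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			fmt.Println("cgroup v1 (legacy or hybrid hierarchy)")
		}
	}

Run inside the functional-261311 container (via docker exec) or on the Ubuntu 20.04 host; a v1 result would be consistent with the validation error logged by kubelet processes 21018, 21109 and 21149 above.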

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.32s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-261311 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-261311 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (63.979099ms)

                                                
                                                
-- stdout --
	{
	    "apiVersion": "v1",
	    "items": [],
	    "kind": "List",
	    "metadata": {
	        "resourceVersion": ""
	    }
	}

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-261311 get po -l tier=control-plane -n kube-system -o=json": exit status 1
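The stderr above shows kubectl failing with "connection refused" against 192.168.49.2:8441, the apiserver endpoint of this profile, which matches the kubelet crash loop from the previous test rather than a kubeconfig or routing problem. When triaging this kind of failure it can help to separate "nothing is listening" (immediate refusal) from "endpoint unreachable or filtered" (timeout); the snippet below is only an illustration using the endpoint reported here.

	// apiprobe.go: illustrative only. A plain TCP dial distinguishes an
	// immediate "connection refused" (port closed, as in this report)
	// from a timeout (endpoint unreachable or filtered).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 3*time.Second)
		if err != nil {
			fmt.Println("apiserver endpoint not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("TCP connect succeeded; apiserver port is open")
	}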
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
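The inspect output above shows the kicbase container itself is healthy: state Running, restart count 0, and 8441/tcp published to 127.0.0.1:33165 on the host, so the failure sits inside the guest rather than in Docker networking. Later in this log the helpers read single fields with Go templates (for example the 22/tcp host port); the sketch below applies the same template style to 8441/tcp by shelling out to docker inspect, purely as an illustration and not as minikube's own code.

	// portlookup.go: illustrative sketch. Reads the host port Docker
	// published for the container's 8441/tcp (apiserver) port, using the
	// same Go-template style the test helpers use for 22/tcp.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "functional-261311").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
	}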
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (390.64505ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
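The status probes in this post-mortem render single fields of the status record with Go templates: {{.Host}} just above printed "Running" while the earlier {{.APIServer}} check printed "Stopped", i.e. the container is up but Kubernetes inside it is not. The snippet below shows how such a --format template selects one field of a struct; the Status type and field names here are assumed for illustration and are not minikube's actual definition.

	// statusfmt.go: illustration of how a --format template such as
	// {{.APIServer}} renders one field of a status value. The Status
	// struct is assumed for this example, not taken from minikube.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		s := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"}
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		_ = t.Execute(os.Stdout, s) // prints "Stopped", matching this report
	}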
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                       ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr                                            │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls --format table --alsologtostderr                                                                                       │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ update-context │ functional-205528 update-context --alsologtostderr -v=2                                                                                           │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ image          │ functional-205528 image ls                                                                                                                        │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ delete         │ -p functional-205528                                                                                                                              │ functional-205528 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │ 12 Dec 25 20:20 UTC │
	│ start          │ -p functional-261311 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:20 UTC │                     │
	│ start          │ -p functional-261311 --alsologtostderr -v=8                                                                                                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:28 UTC │                     │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:3.1                                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:3.3                                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add registry.k8s.io/pause:latest                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache add minikube-local-cache-test:functional-261311                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ functional-261311 cache delete minikube-local-cache-test:functional-261311                                                                        │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ list                                                                                                                                              │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl images                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl rmi registry.k8s.io/pause:latest                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	│ cache          │ functional-261311 cache reload                                                                                                                    │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh            │ functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                                                  │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                                               │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ kubectl        │ functional-261311 kubectl -- --context functional-261311 get pods                                                                                 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	│ start          │ -p functional-261311 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:35:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:35:15.460416  404800 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:35:15.460537  404800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:35:15.460541  404800 out.go:374] Setting ErrFile to fd 2...
	I1212 20:35:15.460545  404800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:35:15.461281  404800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:35:15.461704  404800 out.go:368] Setting JSON to false
	I1212 20:35:15.462524  404800 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11868,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:35:15.462588  404800 start.go:143] virtualization:  
	I1212 20:35:15.465993  404800 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:35:15.469163  404800 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:35:15.469272  404800 notify.go:221] Checking for updates...
	I1212 20:35:15.475214  404800 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:35:15.478288  404800 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:35:15.481030  404800 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:35:15.483916  404800 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:35:15.486846  404800 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:35:15.490383  404800 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:35:15.490523  404800 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:35:15.521733  404800 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:35:15.521840  404800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:35:15.586834  404800 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 20:35:15.575092276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:35:15.586929  404800 docker.go:319] overlay module found
	I1212 20:35:15.590005  404800 out.go:179] * Using the docker driver based on existing profile
	I1212 20:35:15.592944  404800 start.go:309] selected driver: docker
	I1212 20:35:15.592962  404800 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:15.593077  404800 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:35:15.593201  404800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:35:15.653530  404800 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 20:35:15.644295166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:35:15.653919  404800 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:35:15.653944  404800 cni.go:84] Creating CNI manager for ""
	I1212 20:35:15.653992  404800 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:35:15.654035  404800 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:15.657113  404800 out.go:179] * Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	I1212 20:35:15.659873  404800 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:35:15.662874  404800 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:35:15.665759  404800 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:35:15.665839  404800 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:35:15.665900  404800 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:35:15.665919  404800 cache.go:65] Caching tarball of preloaded images
	I1212 20:35:15.666041  404800 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:35:15.666050  404800 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:35:15.666202  404800 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:35:15.685367  404800 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:35:15.685378  404800 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:35:15.685400  404800 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:35:15.685432  404800 start.go:360] acquireMachinesLock for functional-261311: {Name:mkbc4e6c743e47953e99b8ce65e244d33b483105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:35:15.685502  404800 start.go:364] duration metric: took 54.475µs to acquireMachinesLock for "functional-261311"
	I1212 20:35:15.685521  404800 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:35:15.685526  404800 fix.go:54] fixHost starting: 
	I1212 20:35:15.685789  404800 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:35:15.703273  404800 fix.go:112] recreateIfNeeded on functional-261311: state=Running err=<nil>
	W1212 20:35:15.703293  404800 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:35:15.712450  404800 out.go:252] * Updating the running docker "functional-261311" container ...
	I1212 20:35:15.712481  404800 machine.go:94] provisionDockerMachine start ...
	I1212 20:35:15.712578  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:15.736656  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:15.736977  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:15.736984  404800 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:35:15.891915  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:35:15.891929  404800 ubuntu.go:182] provisioning hostname "functional-261311"
	I1212 20:35:15.891999  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:15.910460  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:15.910779  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:15.910787  404800 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-261311 && echo "functional-261311" | sudo tee /etc/hostname
	I1212 20:35:16.077690  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:35:16.077778  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.097025  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:16.097341  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:16.097354  404800 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-261311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-261311/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-261311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:35:16.252758  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:35:16.252773  404800 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:35:16.252793  404800 ubuntu.go:190] setting up certificates
	I1212 20:35:16.252801  404800 provision.go:84] configureAuth start
	I1212 20:35:16.252918  404800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:35:16.270682  404800 provision.go:143] copyHostCerts
	I1212 20:35:16.270755  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:35:16.270763  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:35:16.270834  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:35:16.270926  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:35:16.270930  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:35:16.270953  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:35:16.271010  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:35:16.271014  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:35:16.271036  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:35:16.271079  404800 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.functional-261311 san=[127.0.0.1 192.168.49.2 functional-261311 localhost minikube]
	I1212 20:35:16.466046  404800 provision.go:177] copyRemoteCerts
	I1212 20:35:16.466103  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:35:16.466141  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.490439  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:16.596331  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:35:16.614499  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:35:16.632168  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:35:16.649948  404800 provision.go:87] duration metric: took 397.124655ms to configureAuth
	I1212 20:35:16.649967  404800 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:35:16.650174  404800 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:35:16.650275  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.667262  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:16.667562  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:16.667574  404800 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:35:17.020390  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:35:17.020403  404800 machine.go:97] duration metric: took 1.307915361s to provisionDockerMachine
	I1212 20:35:17.020413  404800 start.go:293] postStartSetup for "functional-261311" (driver="docker")
	I1212 20:35:17.020431  404800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:35:17.020498  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:35:17.020542  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.039179  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.144817  404800 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:35:17.148499  404800 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:35:17.148517  404800 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:35:17.148528  404800 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:35:17.148587  404800 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:35:17.148671  404800 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:35:17.148745  404800 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> hosts in /etc/test/nested/copy/364853
	I1212 20:35:17.148790  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364853
	I1212 20:35:17.156874  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:35:17.175633  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts --> /etc/test/nested/copy/364853/hosts (40 bytes)
	I1212 20:35:17.193693  404800 start.go:296] duration metric: took 173.265259ms for postStartSetup
	I1212 20:35:17.193768  404800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:35:17.193829  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.212738  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.326054  404800 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:35:17.331128  404800 fix.go:56] duration metric: took 1.64559363s for fixHost
	I1212 20:35:17.331145  404800 start.go:83] releasing machines lock for "functional-261311", held for 1.645635346s
	I1212 20:35:17.331211  404800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:35:17.348942  404800 ssh_runner.go:195] Run: cat /version.json
	I1212 20:35:17.348993  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.349240  404800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:35:17.349288  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.377660  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.380423  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.480436  404800 ssh_runner.go:195] Run: systemctl --version
	I1212 20:35:17.572826  404800 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:35:17.610243  404800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:35:17.614893  404800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:35:17.614954  404800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:35:17.623289  404800 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:35:17.623303  404800 start.go:496] detecting cgroup driver to use...
	I1212 20:35:17.623333  404800 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:35:17.623377  404800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:35:17.638845  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:35:17.652624  404800 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:35:17.652690  404800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:35:17.668971  404800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:35:17.682562  404800 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:35:17.807109  404800 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:35:17.921667  404800 docker.go:234] disabling docker service ...
	I1212 20:35:17.921741  404800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:35:17.940321  404800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:35:17.957092  404800 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:35:18.087741  404800 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:35:18.206163  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:35:18.219734  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:35:18.233813  404800 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:35:18.233881  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.242826  404800 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:35:18.242900  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.252023  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.261290  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.270163  404800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:35:18.278452  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.287612  404800 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.296129  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.305360  404800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:35:18.313008  404800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:35:18.320507  404800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:35:18.433496  404800 ssh_runner.go:195] Run: sudo systemctl restart crio
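The lines above rewrite the CRI-O drop-in (pause image, cgroup_manager, conmon_cgroup, ip_unprivileged_port_start) with sed and then restart the runtime. A rough Go sketch of that edit-then-restart pattern, with values copied from the log; this is an illustration, not minikube's implementation, and needs root to run:

package main

import (
	"fmt"
	"os/exec"
)

// Edit the CRI-O drop-in with sed, then reload systemd and restart crio.
func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := [][]string{
		{"sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|`, conf},
		{"sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf},
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("step %v failed: %v\n%s", s, err, out)
			return
		}
	}
}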
	I1212 20:35:18.624476  404800 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:35:18.624545  404800 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:35:18.628455  404800 start.go:564] Will wait 60s for crictl version
	I1212 20:35:18.628509  404800 ssh_runner.go:195] Run: which crictl
	I1212 20:35:18.631901  404800 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:35:18.657967  404800 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:35:18.658043  404800 ssh_runner.go:195] Run: crio --version
	I1212 20:35:18.686054  404800 ssh_runner.go:195] Run: crio --version
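The version probe above prints crictl's plain key/value output (RuntimeName, RuntimeVersion, RuntimeApiVersion). A small sketch of parsing that output, assuming crictl is on PATH and can reach the runtime socket; this is not minikube's actual parser:

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

// Run `crictl version` and print each "Key: Value" field it reports.
func main() {
	out, err := exec.Command("sudo", "crictl", "version").Output()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	for sc.Scan() {
		if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
			fmt.Printf("%s = %s\n", strings.TrimSpace(k), strings.TrimSpace(v))
		}
	}
}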
	I1212 20:35:18.728907  404800 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:35:18.731836  404800 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:35:18.758101  404800 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:35:18.765430  404800 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 20:35:18.768359  404800 kubeadm.go:884] updating cluster {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:35:18.768498  404800 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:35:18.768569  404800 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:35:18.809159  404800 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:35:18.809172  404800 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:35:18.809226  404800 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:35:18.835786  404800 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:35:18.835798  404800 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:35:18.835804  404800 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 20:35:18.835897  404800 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-261311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:35:18.835978  404800 ssh_runner.go:195] Run: crio config
	I1212 20:35:18.911975  404800 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 20:35:18.911996  404800 cni.go:84] Creating CNI manager for ""
	I1212 20:35:18.912005  404800 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:35:18.912021  404800 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:35:18.912048  404800 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-261311 NodeName:functional-261311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:35:18.912174  404800 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-261311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
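The generated kubeadm config above is one multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A stdlib-only sketch that splits the rendered file on document separators and reports each document's kind; the path mirrors the log and is assumed to exist:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Split /var/tmp/minikube/kubeadm.yaml into its YAML documents and print
// each document's `kind:` line.
func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Println(strings.TrimSpace(line))
			}
		}
	}
}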
	
	I1212 20:35:18.912242  404800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:35:18.919878  404800 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:35:18.919945  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:35:18.927506  404800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:35:18.940260  404800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:35:18.953546  404800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1212 20:35:18.966154  404800 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:35:18.969878  404800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:35:19.088694  404800 ssh_runner.go:195] Run: sudo systemctl start kubelet
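The scp-from-memory lines above place the rendered kubelet drop-in and unit file on disk, then reload systemd and start kubelet. A hedged sketch of that write-and-start pattern; unitText is a placeholder standing in for the [Unit]/[Service] content shown earlier in the log, and running this requires root:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Write the kubelet drop-in, then reload systemd and start kubelet.
func main() {
	unitText := "[Unit]\nWants=crio.service\n" // placeholder for the full drop-in
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(unitText), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	for _, args := range [][]string{{"daemon-reload"}, {"start", "kubelet"}} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			fmt.Printf("systemctl %v: %v\n%s", args, err, out)
			return
		}
	}
}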
	I1212 20:35:19.456785  404800 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311 for IP: 192.168.49.2
	I1212 20:35:19.456797  404800 certs.go:195] generating shared ca certs ...
	I1212 20:35:19.456811  404800 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:35:19.457015  404800 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:35:19.457061  404800 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:35:19.457083  404800 certs.go:257] generating profile certs ...
	I1212 20:35:19.457188  404800 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key
	I1212 20:35:19.457266  404800 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7
	I1212 20:35:19.457320  404800 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key
	I1212 20:35:19.457484  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:35:19.457522  404800 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:35:19.457530  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:35:19.457572  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:35:19.457613  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:35:19.457656  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:35:19.457720  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:35:19.458537  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:35:19.481387  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:35:19.503914  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:35:19.527911  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:35:19.547817  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:35:19.567001  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:35:19.585411  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:35:19.603199  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:35:19.621415  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:35:19.639746  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:35:19.657747  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:35:19.675414  404800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:35:19.688797  404800 ssh_runner.go:195] Run: openssl version
	I1212 20:35:19.695324  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.703181  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:35:19.710800  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.714682  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.714738  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.755943  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:35:19.764525  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.772260  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:35:19.780093  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.783725  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.783778  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.825039  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:35:19.832411  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.839917  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:35:19.847683  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.851494  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.851551  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.892840  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
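The cert-trust steps above symlink each PEM into /usr/share/ca-certificates, hash it with openssl, and confirm /etc/ssl/certs/<hash>.0 exists as a symlink. A minimal sketch of that check, assuming openssl is installed; the certificate path here is just one of the files named in the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// Hash a PEM with `openssl x509 -hash -noout`, then check that the
// corresponding /etc/ssl/certs/<hash>.0 symlink is present.
func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		fmt.Println(link, "is present")
	} else {
		fmt.Println(link, "is missing")
	}
}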
	I1212 20:35:19.900611  404800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:35:19.904415  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:35:19.945816  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:35:19.987206  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:35:20.028949  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:35:20.071640  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:35:20.114011  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
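The six openssl runs above verify that each control-plane certificate is still valid for at least another 24h (`-checkend 86400`). An equivalent stdlib sketch of that check; the path is one of the certs named in the log and is assumed readable:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// Parse a PEM certificate and report whether it expires within 24 hours.
func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println(err)
		return
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate valid past 24h:", cert.NotAfter)
	}
}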
	I1212 20:35:20.155956  404800 kubeadm.go:401] StartCluster: {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:20.156040  404800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:35:20.156106  404800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:35:20.185271  404800 cri.go:89] found id: ""
	I1212 20:35:20.185335  404800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:35:20.193716  404800 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:35:20.193726  404800 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:35:20.193778  404800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:35:20.201404  404800 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.201928  404800 kubeconfig.go:125] found "functional-261311" server: "https://192.168.49.2:8441"
	I1212 20:35:20.203285  404800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:35:20.213068  404800 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 20:20:42.746943766 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 20:35:18.963900938 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
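The drift detection above comes from running `diff -u` between the current kubeadm.yaml and the newly rendered kubeadm.yaml.new: an exit code of 1 means the files differ and the cluster gets reconfigured from the .new file. A small sketch of that check, with paths copied from the log; not minikube's own code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Run diff -u and treat exit code 1 as "config drift detected".
func main() {
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.Output()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no kubeadm config drift")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Printf("config drift detected:\n%s", out)
	default:
		fmt.Println("diff failed:", err)
	}
}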
	I1212 20:35:20.213088  404800 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:35:20.213099  404800 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 20:35:20.213154  404800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:35:20.242899  404800 cri.go:89] found id: ""
	I1212 20:35:20.242960  404800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:35:20.261588  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:35:20.270004  404800 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 12 20:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 20:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 20:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 20:24 /etc/kubernetes/scheduler.conf
	
	I1212 20:35:20.270062  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:35:20.278110  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:35:20.285789  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.285844  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:35:20.293376  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:35:20.301132  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.301185  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:35:20.309065  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:35:20.316914  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.316967  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:35:20.324673  404800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:35:20.332520  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:20.381164  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:21.740495  404800 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.359307117s)
	I1212 20:35:21.740554  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:21.936349  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:22.006437  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
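The restart path above replays the individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) with the versioned binaries directory prepended to PATH. A sketch of that sequence, mirroring the commands in the log; it requires root and an existing /var/tmp/minikube/kubeadm.yaml:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// Run each kubeadm init phase in order against the rendered config.
func main() {
	binDir := "/var/lib/minikube/binaries/v1.35.0-beta.0"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command(binDir+"/kubeadm", args...)
		cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("phase %v failed: %v\n%s", p, err, out)
			return
		}
	}
}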
	I1212 20:35:22.060809  404800 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:35:22.060899  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:22.561081  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:23.062037  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:23.561673  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:24.061283  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:24.561690  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:25.061084  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:25.561740  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:26.061753  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:26.561615  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:27.061476  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:27.561193  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:28.061088  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:28.561754  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:29.061218  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:29.561124  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:30.061364  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:30.561503  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:31.061616  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:31.561042  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:32.061002  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:32.561635  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:33.061101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:33.561100  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:34.061640  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:34.562032  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:35.061030  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:35.561966  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:36.061881  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:36.561895  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:37.061604  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:37.561065  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:38.062060  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:38.561065  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:39.061118  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:39.561000  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:40.061043  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:40.561911  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:41.061748  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:41.561627  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:42.061101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:42.561174  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:43.061190  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:43.561060  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:44.061057  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:44.561587  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:45.061910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:45.561122  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:46.061055  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:46.561141  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:47.061107  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:47.560994  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:48.062000  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:48.561057  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:49.061151  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:49.561089  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:50.061007  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:50.561745  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:51.061094  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:51.561413  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:52.061652  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:52.561706  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:53.061685  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:53.561118  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:54.061047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:54.561109  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:55.061626  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:55.561543  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:56.061374  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:56.561047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:57.062047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:57.561053  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:58.061760  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:58.561015  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:59.061910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:59.561602  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:00.061050  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:00.565101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:01.061738  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:01.561016  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:02.061584  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:02.561705  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:03.062021  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:03.561146  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:04.061266  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:04.561610  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:05.061786  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:05.561910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:06.062016  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:06.561621  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:07.061104  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:07.561077  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:08.061034  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:08.561076  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:09.061095  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:09.561610  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:10.062030  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:10.561403  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:11.061217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:11.561772  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:12.061561  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:12.561252  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:13.061001  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:13.561813  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:14.061556  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:14.561701  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:15.061061  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:15.561415  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:16.061155  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:16.561701  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:17.061682  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:17.561217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:18.061108  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:18.561055  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:19.061653  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:19.561105  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:20.061064  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:20.561836  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:21.061167  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:21.561650  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
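The long run of `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above is a poll loop waiting for the apiserver process to appear after the restart. A minimal sketch of the same idea: poll pgrep roughly every 500ms until it succeeds or a deadline passes (the timings here are assumptions, not minikube's exact values):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll pgrep until kube-apiserver shows up or the deadline expires.
func main() {
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}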
	I1212 20:36:22.061836  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:22.061921  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:22.088621  404800 cri.go:89] found id: ""
	I1212 20:36:22.088636  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.088643  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:22.088648  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:22.088710  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:22.115845  404800 cri.go:89] found id: ""
	I1212 20:36:22.115860  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.115867  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:22.115872  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:22.115934  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:22.145607  404800 cri.go:89] found id: ""
	I1212 20:36:22.145622  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.145629  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:22.145634  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:22.145694  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:22.175762  404800 cri.go:89] found id: ""
	I1212 20:36:22.175782  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.175790  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:22.175795  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:22.175852  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:22.205262  404800 cri.go:89] found id: ""
	I1212 20:36:22.205277  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.205283  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:22.205288  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:22.205343  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:22.240968  404800 cri.go:89] found id: ""
	I1212 20:36:22.240981  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.240988  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:22.240993  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:22.241050  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:22.272662  404800 cri.go:89] found id: ""
	I1212 20:36:22.272676  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.272683  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:22.272691  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:22.272700  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:22.301824  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:22.301841  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:22.370470  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:22.370488  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:22.385289  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:22.385306  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:22.449648  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:22.440970   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.441631   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443294   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443822   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.445497   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:22.440970   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.441631   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443294   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443822   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.445497   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:22.449659  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:22.449670  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
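The diagnostics pass above queries crictl for each control-plane component by name (every lookup returns an empty list here) and then collects kubelet, dmesg, describe-nodes and CRI-O logs. A sketch of the per-component container lookup, with component names taken from the log; it assumes crictl and a reachable runtime socket:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// For each component, list matching container IDs via crictl's --name filter.
func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d container(s) found\n", name, len(ids))
	}
}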
	I1212 20:36:25.019320  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:25.030277  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:25.030345  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:25.060950  404800 cri.go:89] found id: ""
	I1212 20:36:25.060975  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.060982  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:25.060988  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:25.061049  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:25.087641  404800 cri.go:89] found id: ""
	I1212 20:36:25.087663  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.087670  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:25.087675  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:25.087735  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:25.114870  404800 cri.go:89] found id: ""
	I1212 20:36:25.114885  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.114893  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:25.114899  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:25.114963  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:25.140642  404800 cri.go:89] found id: ""
	I1212 20:36:25.140664  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.140671  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:25.140677  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:25.140736  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:25.166644  404800 cri.go:89] found id: ""
	I1212 20:36:25.166658  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.166665  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:25.166671  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:25.166731  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:25.192547  404800 cri.go:89] found id: ""
	I1212 20:36:25.192561  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.192567  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:25.192572  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:25.192635  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:25.231874  404800 cri.go:89] found id: ""
	I1212 20:36:25.231889  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.231895  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:25.231903  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:25.231914  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:25.315537  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:25.315559  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:25.330635  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:25.330654  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:25.395220  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:25.386939   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.387844   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389637   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389964   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.391476   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:25.386939   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.387844   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389637   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389964   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.391476   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:25.395260  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:25.395272  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:25.467585  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:25.467605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:27.999765  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:28.012318  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:28.012406  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:28.038452  404800 cri.go:89] found id: ""
	I1212 20:36:28.038467  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.038475  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:28.038481  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:28.038550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:28.065565  404800 cri.go:89] found id: ""
	I1212 20:36:28.065579  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.065586  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:28.065591  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:28.065652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:28.091553  404800 cri.go:89] found id: ""
	I1212 20:36:28.091574  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.091581  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:28.091587  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:28.091651  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:28.117664  404800 cri.go:89] found id: ""
	I1212 20:36:28.117677  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.117684  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:28.117689  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:28.117747  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:28.143314  404800 cri.go:89] found id: ""
	I1212 20:36:28.143328  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.143335  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:28.143339  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:28.143396  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:28.170365  404800 cri.go:89] found id: ""
	I1212 20:36:28.170379  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.170386  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:28.170391  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:28.170450  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:28.194993  404800 cri.go:89] found id: ""
	I1212 20:36:28.195013  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.195019  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:28.195027  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:28.195037  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:28.264144  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:28.264163  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:28.294480  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:28.294497  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:28.364064  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:28.364087  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:28.378788  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:28.378811  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:28.443238  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:28.435365   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.435947   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437460   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437963   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.439466   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:28.435365   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.435947   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437460   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437963   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.439466   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
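Every kubectl attempt in these blocks fails the same way: the node-local kubeconfig points at https://localhost:8441, and nothing is listening there, so the TCP connection is refused before TLS or authentication even start. A quick hypothetical probe to confirm the port is closed (not something the captured run performs) could be:

	# connection refused / empty output means the apiserver socket never opened
	curl -ksS https://localhost:8441/healthz || true
	sudo ss -tlnp | grep -w 8441 || echo "nothing listening on :8441"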
	I1212 20:36:30.944182  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:30.954580  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:30.954652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:30.981452  404800 cri.go:89] found id: ""
	I1212 20:36:30.981467  404800 logs.go:282] 0 containers: []
	W1212 20:36:30.981474  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:30.981479  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:30.981543  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:31.009852  404800 cri.go:89] found id: ""
	I1212 20:36:31.009868  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.009875  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:31.009881  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:31.009949  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:31.041648  404800 cri.go:89] found id: ""
	I1212 20:36:31.041664  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.041671  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:31.041676  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:31.041741  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:31.071159  404800 cri.go:89] found id: ""
	I1212 20:36:31.071194  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.071203  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:31.071208  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:31.071274  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:31.101318  404800 cri.go:89] found id: ""
	I1212 20:36:31.101333  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.101340  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:31.101345  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:31.101407  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:31.128905  404800 cri.go:89] found id: ""
	I1212 20:36:31.128921  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.128937  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:31.128943  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:31.129019  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:31.156884  404800 cri.go:89] found id: ""
	I1212 20:36:31.156899  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.156906  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:31.156914  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:31.156924  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:31.229169  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:31.229188  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:31.244638  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:31.244655  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:31.316835  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:31.307348   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.308074   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.309792   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.310466   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.311410   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:31.307348   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.308074   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.309792   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.310466   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.311410   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:31.316848  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:31.316866  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:31.386236  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:31.386258  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:33.917579  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:33.927716  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:33.927782  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:33.952915  404800 cri.go:89] found id: ""
	I1212 20:36:33.952929  404800 logs.go:282] 0 containers: []
	W1212 20:36:33.952936  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:33.952941  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:33.952998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:33.986667  404800 cri.go:89] found id: ""
	I1212 20:36:33.986681  404800 logs.go:282] 0 containers: []
	W1212 20:36:33.986688  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:33.986693  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:33.986753  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:34.017351  404800 cri.go:89] found id: ""
	I1212 20:36:34.017367  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.017374  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:34.017379  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:34.017459  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:34.044495  404800 cri.go:89] found id: ""
	I1212 20:36:34.044509  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.044517  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:34.044522  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:34.044579  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:34.070939  404800 cri.go:89] found id: ""
	I1212 20:36:34.070953  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.070960  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:34.070964  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:34.071022  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:34.099384  404800 cri.go:89] found id: ""
	I1212 20:36:34.099398  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.099405  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:34.099411  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:34.099469  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:34.125342  404800 cri.go:89] found id: ""
	I1212 20:36:34.125357  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.125364  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:34.125372  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:34.125383  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:34.195370  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:34.195391  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:34.212114  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:34.212130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:34.294767  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:34.286119   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.286818   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.288478   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.289037   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.290758   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:34.286119   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.286818   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.288478   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.289037   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.290758   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:34.294788  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:34.294798  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:34.365333  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:34.365354  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
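Note that every component lookup above returns an empty id list ("0 containers"), which suggests the kubelet never created (or already removed) any control-plane containers, rather than their having crashed. Assuming shell access to the node, one hypothetical way to dig further (not part of this run) is to check that the static pod manifests exist and what the kubelet says about them:

	# static pod manifests should be present even when no containers ever start
	ls -l /etc/kubernetes/manifests/
	sudo journalctl -u kubelet -n 100 --no-pager | grep -iE 'apiserver|static pod|error'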
	I1212 20:36:36.899244  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:36.909418  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:36.909481  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:36.934188  404800 cri.go:89] found id: ""
	I1212 20:36:36.934202  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.934219  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:36.934224  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:36.934281  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:36.959806  404800 cri.go:89] found id: ""
	I1212 20:36:36.959821  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.959828  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:36.959832  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:36.959898  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:36.986148  404800 cri.go:89] found id: ""
	I1212 20:36:36.986162  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.986169  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:36.986174  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:36.986231  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:37.017876  404800 cri.go:89] found id: ""
	I1212 20:36:37.017892  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.017899  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:37.017905  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:37.017971  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:37.047901  404800 cri.go:89] found id: ""
	I1212 20:36:37.047915  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.047921  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:37.047926  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:37.047985  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:37.076531  404800 cri.go:89] found id: ""
	I1212 20:36:37.076546  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.076553  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:37.076558  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:37.076615  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:37.102846  404800 cri.go:89] found id: ""
	I1212 20:36:37.102870  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.102877  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:37.102885  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:37.102896  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:37.134007  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:37.134024  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:37.207327  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:37.207352  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:37.222638  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:37.222657  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:37.290385  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:37.281958   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.282679   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.283817   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.284511   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.286319   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:37.281958   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.282679   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.283817   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.284511   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.286319   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:37.290395  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:37.290406  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:39.860964  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:39.871500  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:39.871558  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:39.898740  404800 cri.go:89] found id: ""
	I1212 20:36:39.898755  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.898762  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:39.898767  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:39.898830  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:39.925154  404800 cri.go:89] found id: ""
	I1212 20:36:39.925168  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.925175  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:39.925180  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:39.925239  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:39.950208  404800 cri.go:89] found id: ""
	I1212 20:36:39.950223  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.950229  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:39.950234  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:39.950297  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:39.976836  404800 cri.go:89] found id: ""
	I1212 20:36:39.976851  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.976857  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:39.976863  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:39.976936  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:40.009665  404800 cri.go:89] found id: ""
	I1212 20:36:40.009695  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.010153  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:40.010168  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:40.010262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:40.067797  404800 cri.go:89] found id: ""
	I1212 20:36:40.067813  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.067838  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:40.067844  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:40.067922  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:40.103262  404800 cri.go:89] found id: ""
	I1212 20:36:40.103277  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.103287  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:40.103295  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:40.103308  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:40.119554  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:40.119573  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:40.195337  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:40.185349   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.186460   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188199   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188873   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.190824   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:40.185349   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.186460   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188199   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188873   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.190824   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:40.195364  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:40.195376  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:40.270010  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:40.270029  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:40.299631  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:40.299652  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:42.866117  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:42.876408  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:42.876467  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:42.901308  404800 cri.go:89] found id: ""
	I1212 20:36:42.901321  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.901328  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:42.901333  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:42.901396  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:42.925954  404800 cri.go:89] found id: ""
	I1212 20:36:42.925968  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.925975  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:42.925980  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:42.926041  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:42.951209  404800 cri.go:89] found id: ""
	I1212 20:36:42.951224  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.951231  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:42.951236  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:42.951296  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:42.977995  404800 cri.go:89] found id: ""
	I1212 20:36:42.978010  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.978017  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:42.978022  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:42.978082  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:43.004860  404800 cri.go:89] found id: ""
	I1212 20:36:43.004875  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.004892  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:43.004898  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:43.004973  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:43.040400  404800 cri.go:89] found id: ""
	I1212 20:36:43.040414  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.040421  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:43.040427  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:43.040485  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:43.068090  404800 cri.go:89] found id: ""
	I1212 20:36:43.068104  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.068122  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:43.068130  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:43.068144  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:43.140175  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:43.140195  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:43.154957  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:43.154976  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:43.225443  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:43.216555   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.217274   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.218829   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.219142   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.220753   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:43.216555   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.217274   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.218829   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.219142   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.220753   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:43.225462  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:43.225473  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:43.307152  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:43.307175  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:45.837432  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:45.847721  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:45.847783  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:45.874064  404800 cri.go:89] found id: ""
	I1212 20:36:45.874118  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.874125  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:45.874131  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:45.874197  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:45.902655  404800 cri.go:89] found id: ""
	I1212 20:36:45.902669  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.902676  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:45.902681  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:45.902739  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:45.929017  404800 cri.go:89] found id: ""
	I1212 20:36:45.929031  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.929044  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:45.929050  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:45.929118  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:45.958749  404800 cri.go:89] found id: ""
	I1212 20:36:45.958763  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.958770  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:45.958776  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:45.958837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:45.989217  404800 cri.go:89] found id: ""
	I1212 20:36:45.989239  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.989246  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:45.989252  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:45.989317  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:46.017594  404800 cri.go:89] found id: ""
	I1212 20:36:46.017609  404800 logs.go:282] 0 containers: []
	W1212 20:36:46.017616  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:46.017621  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:46.017681  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:46.047594  404800 cri.go:89] found id: ""
	I1212 20:36:46.047619  404800 logs.go:282] 0 containers: []
	W1212 20:36:46.047628  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:46.047636  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:46.047647  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:46.113115  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:46.113137  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:46.128309  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:46.128328  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:46.195035  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:46.186544   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.187172   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.188933   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.189538   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.191089   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:46.186544   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.187172   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.188933   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.189538   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.191089   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:46.195044  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:46.195054  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:46.268896  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:46.268917  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:48.800382  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:48.810496  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:48.810556  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:48.835685  404800 cri.go:89] found id: ""
	I1212 20:36:48.835699  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.835706  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:48.835712  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:48.835772  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:48.864872  404800 cri.go:89] found id: ""
	I1212 20:36:48.864892  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.864899  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:48.864904  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:48.864969  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:48.889491  404800 cri.go:89] found id: ""
	I1212 20:36:48.889505  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.889512  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:48.889517  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:48.889577  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:48.914454  404800 cri.go:89] found id: ""
	I1212 20:36:48.914468  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.914474  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:48.914480  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:48.914533  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:48.938478  404800 cri.go:89] found id: ""
	I1212 20:36:48.938492  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.938499  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:48.938504  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:48.938570  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:48.964129  404800 cri.go:89] found id: ""
	I1212 20:36:48.964143  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.964151  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:48.964156  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:48.964221  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:48.989666  404800 cri.go:89] found id: ""
	I1212 20:36:48.989680  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.989687  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:48.989695  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:48.989705  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:49.063089  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:49.063110  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:49.095579  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:49.095596  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:49.163720  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:49.163740  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:49.178328  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:49.178344  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:49.260325  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:49.251791   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.252708   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.253936   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.254698   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.256413   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:49.251791   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.252708   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.253936   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.254698   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.256413   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
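The timestamps show this cycle repeating roughly every three seconds (20:36:25, :28, :31, ..., :49) with no component container ever appearing. An equivalent manual watch loop, a sketch under the assumption of node shell access rather than anything run here, could be:

	# poll until a kube-apiserver container appears, mirroring the ~3 s cadence above
	while ! sudo crictl ps -a --quiet --name=kube-apiserver | grep -q .; do
	    echo "$(date +%T) kube-apiserver container not found, retrying"
	    sleep 3
	done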
	I1212 20:36:51.761045  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:51.771641  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:51.771702  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:51.797458  404800 cri.go:89] found id: ""
	I1212 20:36:51.797472  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.797479  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:51.797484  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:51.797541  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:51.823244  404800 cri.go:89] found id: ""
	I1212 20:36:51.823268  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.823274  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:51.823279  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:51.823346  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:51.848495  404800 cri.go:89] found id: ""
	I1212 20:36:51.848509  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.848516  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:51.848520  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:51.848580  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:51.873152  404800 cri.go:89] found id: ""
	I1212 20:36:51.873168  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.873175  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:51.873180  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:51.873238  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:51.898283  404800 cri.go:89] found id: ""
	I1212 20:36:51.898297  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.898305  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:51.898310  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:51.898370  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:51.924343  404800 cri.go:89] found id: ""
	I1212 20:36:51.924358  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.924386  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:51.924392  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:51.924455  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:51.949330  404800 cri.go:89] found id: ""
	I1212 20:36:51.949345  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.949352  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:51.949359  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:51.949371  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:52.016304  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:52.016326  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:52.032963  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:52.032980  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:52.109987  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:52.099831   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.100720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.101466   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.103451   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.104261   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:52.099831   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.100720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.101466   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.103451   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.104261   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:52.109999  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:52.110012  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:52.180144  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:52.180164  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:54.720069  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:54.730740  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:54.730803  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:54.758017  404800 cri.go:89] found id: ""
	I1212 20:36:54.758032  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.758038  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:54.758044  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:54.758105  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:54.790190  404800 cri.go:89] found id: ""
	I1212 20:36:54.790210  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.790217  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:54.790222  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:54.790281  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:54.819974  404800 cri.go:89] found id: ""
	I1212 20:36:54.819989  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.819996  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:54.820001  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:54.820065  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:54.847251  404800 cri.go:89] found id: ""
	I1212 20:36:54.847265  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.847272  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:54.847277  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:54.847342  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:54.873168  404800 cri.go:89] found id: ""
	I1212 20:36:54.873182  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.873190  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:54.873195  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:54.873262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:54.898145  404800 cri.go:89] found id: ""
	I1212 20:36:54.898160  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.898167  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:54.898175  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:54.898237  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:54.924123  404800 cri.go:89] found id: ""
	I1212 20:36:54.924146  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.924155  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:54.924163  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:54.924173  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:54.989756  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:54.989775  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:55.021117  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:55.021137  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:55.090802  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:55.082767   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.083409   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.084984   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.085445   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.086924   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:55.082767   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.083409   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.084984   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.085445   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.086924   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:55.090816  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:55.090828  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:55.164266  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:55.164287  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:57.696458  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:57.706599  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:57.706656  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:57.732396  404800 cri.go:89] found id: ""
	I1212 20:36:57.732410  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.732420  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:57.732425  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:57.732485  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:57.758017  404800 cri.go:89] found id: ""
	I1212 20:36:57.758032  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.758039  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:57.758044  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:57.758100  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:57.784957  404800 cri.go:89] found id: ""
	I1212 20:36:57.784971  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.784978  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:57.784983  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:57.785044  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:57.810973  404800 cri.go:89] found id: ""
	I1212 20:36:57.810986  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.810993  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:57.810999  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:57.811054  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:57.837384  404800 cri.go:89] found id: ""
	I1212 20:36:57.837398  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.837406  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:57.837411  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:57.837487  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:57.863576  404800 cri.go:89] found id: ""
	I1212 20:36:57.863598  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.863605  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:57.863610  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:57.863676  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:57.889215  404800 cri.go:89] found id: ""
	I1212 20:36:57.889236  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.889244  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:57.889252  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:57.889263  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:57.956054  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:57.956076  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:57.970574  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:57.970590  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:58.038134  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:58.029330   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.029739   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.031379   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.032214   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.033970   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:58.029330   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.029739   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.031379   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.032214   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.033970   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:58.038144  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:58.038160  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:58.109516  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:58.109541  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:00.640789  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:00.651136  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:00.651196  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:00.678187  404800 cri.go:89] found id: ""
	I1212 20:37:00.678202  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.678209  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:00.678215  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:00.678275  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:00.703384  404800 cri.go:89] found id: ""
	I1212 20:37:00.703400  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.703407  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:00.703412  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:00.703474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:00.735999  404800 cri.go:89] found id: ""
	I1212 20:37:00.736013  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.736020  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:00.736025  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:00.736083  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:00.762232  404800 cri.go:89] found id: ""
	I1212 20:37:00.762246  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.762253  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:00.762258  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:00.762314  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:00.788575  404800 cri.go:89] found id: ""
	I1212 20:37:00.788589  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.788596  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:00.788601  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:00.788663  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:00.815050  404800 cri.go:89] found id: ""
	I1212 20:37:00.815065  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.815081  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:00.815087  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:00.815146  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:00.840166  404800 cri.go:89] found id: ""
	I1212 20:37:00.840180  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.840196  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:00.840205  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:00.840216  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:00.905766  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:00.905787  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:00.920612  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:00.920631  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:00.987903  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:00.979886   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.980290   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.981934   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.982374   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.983860   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:00.979886   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.980290   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.981934   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.982374   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.983860   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:00.987914  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:00.987926  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:01.058125  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:01.058146  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:03.588584  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:03.599133  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:03.599202  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:03.629322  404800 cri.go:89] found id: ""
	I1212 20:37:03.629336  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.629343  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:03.629348  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:03.629410  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:03.654415  404800 cri.go:89] found id: ""
	I1212 20:37:03.654429  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.654436  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:03.654443  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:03.654499  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:03.679922  404800 cri.go:89] found id: ""
	I1212 20:37:03.679937  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.679944  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:03.679950  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:03.680015  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:03.706619  404800 cri.go:89] found id: ""
	I1212 20:37:03.706634  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.706640  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:03.706646  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:03.706707  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:03.733101  404800 cri.go:89] found id: ""
	I1212 20:37:03.733116  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.733123  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:03.733128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:03.733189  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:03.758431  404800 cri.go:89] found id: ""
	I1212 20:37:03.758445  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.758452  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:03.758457  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:03.758520  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:03.789138  404800 cri.go:89] found id: ""
	I1212 20:37:03.789152  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.789159  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:03.789166  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:03.789177  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:03.852394  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:03.843826   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.844548   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846260   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846901   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.848580   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:03.843826   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.844548   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846260   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846901   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.848580   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:03.852404  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:03.852415  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:03.921263  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:03.921283  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:03.950006  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:03.950022  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:04.020715  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:04.020739  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:06.536553  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:06.547113  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:06.547176  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:06.575862  404800 cri.go:89] found id: ""
	I1212 20:37:06.575876  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.575883  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:06.575888  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:06.575947  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:06.601781  404800 cri.go:89] found id: ""
	I1212 20:37:06.601796  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.601803  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:06.601808  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:06.601868  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:06.627486  404800 cri.go:89] found id: ""
	I1212 20:37:06.627500  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.627507  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:06.627520  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:06.627577  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:06.656432  404800 cri.go:89] found id: ""
	I1212 20:37:06.656446  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.656454  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:06.656465  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:06.656526  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:06.681705  404800 cri.go:89] found id: ""
	I1212 20:37:06.681719  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.681726  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:06.681731  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:06.681794  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:06.707068  404800 cri.go:89] found id: ""
	I1212 20:37:06.707083  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.707090  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:06.707095  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:06.707157  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:06.734286  404800 cri.go:89] found id: ""
	I1212 20:37:06.734300  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.734307  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:06.734314  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:06.734324  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:06.799595  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:06.799616  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:06.814521  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:06.814543  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:06.881453  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:06.872121   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.872841   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.874695   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.875330   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.876927   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:06.872121   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.872841   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.874695   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.875330   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.876927   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:06.881463  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:06.881474  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:06.950345  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:06.950365  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:09.488970  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:09.500875  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:09.500940  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:09.529418  404800 cri.go:89] found id: ""
	I1212 20:37:09.529433  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.529439  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:09.529445  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:09.529505  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:09.559685  404800 cri.go:89] found id: ""
	I1212 20:37:09.559700  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.559707  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:09.559712  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:09.559772  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:09.587781  404800 cri.go:89] found id: ""
	I1212 20:37:09.587796  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.587802  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:09.587807  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:09.587869  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:09.613804  404800 cri.go:89] found id: ""
	I1212 20:37:09.613820  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.613826  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:09.613832  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:09.613903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:09.639550  404800 cri.go:89] found id: ""
	I1212 20:37:09.639566  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.639573  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:09.639578  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:09.639644  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:09.669938  404800 cri.go:89] found id: ""
	I1212 20:37:09.669953  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.669960  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:09.669965  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:09.670025  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:09.696771  404800 cri.go:89] found id: ""
	I1212 20:37:09.696785  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.696799  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:09.696807  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:09.696818  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:09.763319  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:09.763340  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:09.778782  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:09.778799  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:09.846376  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:09.837510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.838340   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.839144   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.840746   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.841106   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:09.837510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.838340   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.839144   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.840746   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.841106   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:09.846385  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:09.846396  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:09.917476  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:09.917497  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:12.447817  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:12.457978  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:12.458042  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:12.491473  404800 cri.go:89] found id: ""
	I1212 20:37:12.491487  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.491495  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:12.491500  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:12.491559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:12.522865  404800 cri.go:89] found id: ""
	I1212 20:37:12.522881  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.522888  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:12.522892  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:12.522959  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:12.548498  404800 cri.go:89] found id: ""
	I1212 20:37:12.548514  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.548521  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:12.548526  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:12.548592  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:12.579700  404800 cri.go:89] found id: ""
	I1212 20:37:12.579714  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.579721  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:12.579726  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:12.579791  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:12.606849  404800 cri.go:89] found id: ""
	I1212 20:37:12.606863  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.606870  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:12.606878  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:12.606942  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:12.632352  404800 cri.go:89] found id: ""
	I1212 20:37:12.632386  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.632394  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:12.632400  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:12.632464  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:12.657776  404800 cri.go:89] found id: ""
	I1212 20:37:12.657791  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.657798  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:12.657805  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:12.657816  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:12.672067  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:12.672083  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:12.744080  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:12.736614   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.737064   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738565   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738904   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.740331   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:12.736614   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.737064   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738565   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738904   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.740331   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:12.744093  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:12.744103  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:12.811395  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:12.811414  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:12.839843  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:12.839862  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:15.405601  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:15.417051  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:15.417110  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:15.442503  404800 cri.go:89] found id: ""
	I1212 20:37:15.442517  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.442524  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:15.442530  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:15.442588  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:15.483736  404800 cri.go:89] found id: ""
	I1212 20:37:15.483763  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.483770  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:15.483775  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:15.483843  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:15.515671  404800 cri.go:89] found id: ""
	I1212 20:37:15.515685  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.515692  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:15.515697  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:15.515764  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:15.548136  404800 cri.go:89] found id: ""
	I1212 20:37:15.548151  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.548158  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:15.548163  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:15.548221  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:15.576936  404800 cri.go:89] found id: ""
	I1212 20:37:15.576951  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.576958  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:15.576962  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:15.577022  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:15.603608  404800 cri.go:89] found id: ""
	I1212 20:37:15.603622  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.603629  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:15.603634  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:15.603689  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:15.638105  404800 cri.go:89] found id: ""
	I1212 20:37:15.638125  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.638133  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:15.638140  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:15.638150  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:15.708493  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:15.708513  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:15.723827  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:15.723851  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:15.792302  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:15.784344   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.784799   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786487   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786941   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.788392   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:15.784344   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.784799   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786487   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786941   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.788392   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:15.792314  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:15.792326  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:15.860772  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:15.860796  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
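The cycle above repeats roughly every three seconds: minikube polls for each control-plane container with crictl, finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. A minimal shell sketch of the same per-component check, assuming crictl and journalctl are available on the node as the log shows:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      # mirrors "sudo crictl ps -a --quiet --name=<name>" from the log above
      ids=$(sudo crictl ps -a --quiet --name="$name")
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
    sudo journalctl -u kubelet -n 400   # kubelet logs, as gathered by the test
    sudo journalctl -u crio -n 400      # CRI-O logs, as gathered by the test
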
	I1212 20:37:18.397462  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:18.407317  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:18.407382  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:18.433353  404800 cri.go:89] found id: ""
	I1212 20:37:18.433368  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.433375  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:18.433379  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:18.433435  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:18.465547  404800 cri.go:89] found id: ""
	I1212 20:37:18.465561  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.465568  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:18.465572  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:18.465629  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:18.498811  404800 cri.go:89] found id: ""
	I1212 20:37:18.498825  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.498832  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:18.498837  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:18.498894  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:18.525729  404800 cri.go:89] found id: ""
	I1212 20:37:18.525745  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.525752  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:18.525758  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:18.525820  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:18.555807  404800 cri.go:89] found id: ""
	I1212 20:37:18.555822  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.555829  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:18.555834  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:18.555890  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:18.586968  404800 cri.go:89] found id: ""
	I1212 20:37:18.586982  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.586989  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:18.586994  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:18.587048  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:18.613654  404800 cri.go:89] found id: ""
	I1212 20:37:18.613668  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.613675  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:18.613683  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:18.613694  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:18.685435  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:18.685464  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:18.701543  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:18.701560  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:18.771148  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:18.762368   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.763025   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765038   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765857   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.767427   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:18.762368   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.763025   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765038   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765857   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.767427   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:18.771159  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:18.771169  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:18.840302  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:18.840324  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:21.370649  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:21.380730  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:21.380785  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:21.407262  404800 cri.go:89] found id: ""
	I1212 20:37:21.407277  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.407285  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:21.407290  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:21.407353  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:21.431725  404800 cri.go:89] found id: ""
	I1212 20:37:21.431741  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.431748  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:21.431753  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:21.431808  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:21.462830  404800 cri.go:89] found id: ""
	I1212 20:37:21.462844  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.462851  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:21.462856  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:21.462914  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:21.490038  404800 cri.go:89] found id: ""
	I1212 20:37:21.490053  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.490060  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:21.490066  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:21.490123  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:21.522135  404800 cri.go:89] found id: ""
	I1212 20:37:21.522152  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.522165  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:21.522170  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:21.522243  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:21.550272  404800 cri.go:89] found id: ""
	I1212 20:37:21.550286  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.550293  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:21.550298  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:21.550352  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:21.575855  404800 cri.go:89] found id: ""
	I1212 20:37:21.575868  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.575875  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:21.575882  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:21.575892  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:21.643213  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:21.643234  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:21.676057  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:21.676076  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:21.746870  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:21.746890  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:21.762368  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:21.762383  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:21.829472  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:21.821498   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.822053   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.823553   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.824031   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.825114   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:21.821498   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.822053   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.823553   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.824031   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.825114   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
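Every describe-nodes attempt fails the same way: the apiserver on localhost:8441 refuses connections because no kube-apiserver container exists. A hedged diagnostic sketch for that symptom (ss, and the kubeconfig path taken from the command above, are assumptions about the node environment):

    sudo ss -ltnp | grep ':8441' || echo "nothing listening on port 8441"   # assumes ss is installed on the node
    grep 'server:' /var/lib/minikube/kubeconfig                             # should point at https://localhost:8441
    sudo crictl ps -a --quiet --name=kube-apiserver                         # empty output matches the failures above
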
	I1212 20:37:24.331150  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:24.341451  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:24.341509  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:24.365339  404800 cri.go:89] found id: ""
	I1212 20:37:24.365354  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.365362  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:24.365367  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:24.365430  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:24.392822  404800 cri.go:89] found id: ""
	I1212 20:37:24.392837  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.392844  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:24.392849  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:24.392941  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:24.419333  404800 cri.go:89] found id: ""
	I1212 20:37:24.419347  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.419354  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:24.419365  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:24.419422  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:24.444927  404800 cri.go:89] found id: ""
	I1212 20:37:24.444940  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.444947  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:24.444952  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:24.445014  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:24.479382  404800 cri.go:89] found id: ""
	I1212 20:37:24.479411  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.479422  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:24.479427  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:24.479496  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:24.519373  404800 cri.go:89] found id: ""
	I1212 20:37:24.519387  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.519394  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:24.519399  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:24.519458  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:24.546714  404800 cri.go:89] found id: ""
	I1212 20:37:24.546729  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.546736  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:24.546744  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:24.546755  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:24.612546  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:24.612568  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:24.627419  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:24.627435  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:24.695735  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:24.686719   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.687385   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689276   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689753   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.691296   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:24.686719   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.687385   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689276   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689753   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.691296   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:24.695745  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:24.695757  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:24.764903  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:24.764929  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:27.295998  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:27.306158  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:27.306222  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:27.331510  404800 cri.go:89] found id: ""
	I1212 20:37:27.331524  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.331532  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:27.331549  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:27.331608  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:27.357120  404800 cri.go:89] found id: ""
	I1212 20:37:27.357134  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.357141  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:27.357146  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:27.357227  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:27.383390  404800 cri.go:89] found id: ""
	I1212 20:37:27.383404  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.383411  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:27.383416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:27.383471  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:27.408672  404800 cri.go:89] found id: ""
	I1212 20:37:27.408687  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.408695  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:27.408699  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:27.408758  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:27.434453  404800 cri.go:89] found id: ""
	I1212 20:37:27.434467  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.434478  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:27.434483  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:27.434542  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:27.467590  404800 cri.go:89] found id: ""
	I1212 20:37:27.467603  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.467610  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:27.467615  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:27.467672  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:27.501872  404800 cri.go:89] found id: ""
	I1212 20:37:27.501886  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.501893  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:27.501900  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:27.501912  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:27.574950  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:27.574971  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:27.590147  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:27.590163  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:27.659572  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:27.651234   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.652048   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.653725   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.654359   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.655385   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:27.651234   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.652048   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.653725   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.654359   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.655385   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:27.659583  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:27.659594  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:27.728089  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:27.728111  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:30.260552  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:30.272906  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:30.272984  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:30.302879  404800 cri.go:89] found id: ""
	I1212 20:37:30.302903  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.302911  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:30.302916  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:30.302993  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:30.332792  404800 cri.go:89] found id: ""
	I1212 20:37:30.332807  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.332814  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:30.332819  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:30.332877  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:30.359283  404800 cri.go:89] found id: ""
	I1212 20:37:30.359298  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.359306  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:30.359311  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:30.359369  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:30.385609  404800 cri.go:89] found id: ""
	I1212 20:37:30.385624  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.385643  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:30.385649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:30.385709  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:30.410328  404800 cri.go:89] found id: ""
	I1212 20:37:30.410343  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.410358  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:30.410362  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:30.410423  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:30.435005  404800 cri.go:89] found id: ""
	I1212 20:37:30.435019  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.435026  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:30.435031  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:30.435089  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:30.474088  404800 cri.go:89] found id: ""
	I1212 20:37:30.474102  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.474109  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:30.474116  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:30.474127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:30.508894  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:30.508918  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:30.583876  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:30.583895  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:30.599205  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:30.599229  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:30.667713  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:30.659122   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.659662   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661283   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661849   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.663383   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:30.659122   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.659662   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661283   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661849   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.663383   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:30.667723  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:30.667749  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:33.236428  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:33.246549  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:33.246607  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:33.272236  404800 cri.go:89] found id: ""
	I1212 20:37:33.272250  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.272257  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:33.272262  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:33.272324  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:33.297982  404800 cri.go:89] found id: ""
	I1212 20:37:33.297997  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.298004  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:33.298009  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:33.298068  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:33.324170  404800 cri.go:89] found id: ""
	I1212 20:37:33.324183  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.324190  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:33.324195  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:33.324252  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:33.350869  404800 cri.go:89] found id: ""
	I1212 20:37:33.350883  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.350890  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:33.350895  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:33.350950  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:33.376336  404800 cri.go:89] found id: ""
	I1212 20:37:33.376352  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.376360  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:33.376384  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:33.376446  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:33.402358  404800 cri.go:89] found id: ""
	I1212 20:37:33.402371  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.402378  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:33.402384  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:33.402444  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:33.428067  404800 cri.go:89] found id: ""
	I1212 20:37:33.428081  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.428088  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:33.428104  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:33.428114  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:33.498721  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:33.498744  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:33.532343  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:33.532362  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:33.601583  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:33.601603  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:33.616929  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:33.616947  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:33.680299  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:33.671666   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.672531   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674007   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674498   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.676176   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:33.671666   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.672531   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674007   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674498   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.676176   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:36.180540  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:36.191300  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:36.191360  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:36.219483  404800 cri.go:89] found id: ""
	I1212 20:37:36.219498  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.219505  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:36.219511  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:36.219569  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:36.246240  404800 cri.go:89] found id: ""
	I1212 20:37:36.246255  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.246262  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:36.246267  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:36.246326  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:36.272949  404800 cri.go:89] found id: ""
	I1212 20:37:36.272962  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.272969  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:36.272975  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:36.273038  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:36.298716  404800 cri.go:89] found id: ""
	I1212 20:37:36.298731  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.298738  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:36.298743  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:36.298798  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:36.325228  404800 cri.go:89] found id: ""
	I1212 20:37:36.325242  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.325249  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:36.325254  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:36.325312  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:36.350322  404800 cri.go:89] found id: ""
	I1212 20:37:36.350337  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.350344  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:36.350350  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:36.350406  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:36.380083  404800 cri.go:89] found id: ""
	I1212 20:37:36.380097  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.380104  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:36.380117  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:36.380128  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:36.442887  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:36.434327   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.435078   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.436885   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.437411   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.438936   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:36.434327   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.435078   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.436885   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.437411   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.438936   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:36.442899  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:36.442910  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:36.514571  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:36.514592  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:36.549020  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:36.549036  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:36.615002  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:36.615023  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:39.129960  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:39.139842  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:39.139903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:39.164988  404800 cri.go:89] found id: ""
	I1212 20:37:39.165003  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.165010  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:39.165014  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:39.165072  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:39.195151  404800 cri.go:89] found id: ""
	I1212 20:37:39.195166  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.195172  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:39.195177  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:39.195235  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:39.223301  404800 cri.go:89] found id: ""
	I1212 20:37:39.223315  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.223322  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:39.223327  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:39.223384  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:39.248078  404800 cri.go:89] found id: ""
	I1212 20:37:39.248093  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.248100  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:39.248105  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:39.248162  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:39.272363  404800 cri.go:89] found id: ""
	I1212 20:37:39.272403  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.272411  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:39.272415  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:39.272474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:39.297353  404800 cri.go:89] found id: ""
	I1212 20:37:39.297367  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.297374  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:39.297379  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:39.297437  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:39.322842  404800 cri.go:89] found id: ""
	I1212 20:37:39.322855  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.322863  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:39.322870  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:39.322881  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:39.337445  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:39.337460  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:39.398684  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:39.390797   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.391338   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.392503   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.393095   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.394860   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:39.390797   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.391338   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.392503   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.393095   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.394860   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:39.398694  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:39.398704  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:39.472608  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:39.472628  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:39.511488  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:39.517700  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
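The surrounding loop simply waits for a kube-apiserver process to appear before re-running the container checks. A hypothetical equivalent of that wait, using the same pgrep pattern shown in the log (not minikube's actual implementation):

    # sketch of the observed ~3s retry loop
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      echo "kube-apiserver process not found; retrying in 3s"
      sleep 3
    done
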
	I1212 20:37:42.092404  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:42.104757  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:42.104826  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:42.137172  404800 cri.go:89] found id: ""
	I1212 20:37:42.137189  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.137198  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:42.137204  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:42.137277  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:42.168320  404800 cri.go:89] found id: ""
	I1212 20:37:42.168336  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.168344  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:42.168349  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:42.168455  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:42.202618  404800 cri.go:89] found id: ""
	I1212 20:37:42.202633  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.202641  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:42.202647  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:42.202714  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:42.232011  404800 cri.go:89] found id: ""
	I1212 20:37:42.232026  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.232034  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:42.232039  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:42.232101  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:42.260345  404800 cri.go:89] found id: ""
	I1212 20:37:42.260360  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.260398  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:42.260403  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:42.260465  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:42.286857  404800 cri.go:89] found id: ""
	I1212 20:37:42.286882  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.286890  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:42.286898  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:42.286968  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:42.314846  404800 cri.go:89] found id: ""
	I1212 20:37:42.314870  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.314877  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:42.314885  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:42.314898  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:42.382203  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:42.382223  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:42.397537  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:42.397554  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:42.463930  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:42.455367   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.456320   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458022   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458334   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.459806   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:42.455367   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.456320   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458022   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458334   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.459806   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:42.463940  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:42.463951  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:42.539788  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:42.539809  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:45.073125  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:45.091416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:45.091491  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:45.126675  404800 cri.go:89] found id: ""
	I1212 20:37:45.126699  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.126707  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:45.126714  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:45.126789  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:45.167457  404800 cri.go:89] found id: ""
	I1212 20:37:45.167475  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.167483  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:45.167489  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:45.167559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:45.226232  404800 cri.go:89] found id: ""
	I1212 20:37:45.226264  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.226292  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:45.226299  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:45.226372  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:45.273410  404800 cri.go:89] found id: ""
	I1212 20:37:45.273427  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.273435  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:45.273441  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:45.273513  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:45.313155  404800 cri.go:89] found id: ""
	I1212 20:37:45.313171  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.313178  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:45.313183  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:45.313253  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:45.345614  404800 cri.go:89] found id: ""
	I1212 20:37:45.345640  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.345669  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:45.345688  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:45.345851  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:45.375592  404800 cri.go:89] found id: ""
	I1212 20:37:45.375606  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.375614  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:45.375622  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:45.375633  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:45.446441  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:45.446461  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:45.463226  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:45.463243  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:45.540934  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:45.533118   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.533590   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535134   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535468   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.536952   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:45.533118   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.533590   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535134   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535468   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.536952   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:45.540944  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:45.540955  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:45.610027  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:45.610051  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:48.142953  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:48.153422  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:48.153489  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:48.182170  404800 cri.go:89] found id: ""
	I1212 20:37:48.182185  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.182192  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:48.182197  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:48.182255  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:48.207474  404800 cri.go:89] found id: ""
	I1212 20:37:48.207498  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.207506  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:48.207511  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:48.207588  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:48.232357  404800 cri.go:89] found id: ""
	I1212 20:37:48.232391  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.232399  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:48.232404  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:48.232472  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:48.257989  404800 cri.go:89] found id: ""
	I1212 20:37:48.258016  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.258024  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:48.258029  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:48.258095  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:48.282918  404800 cri.go:89] found id: ""
	I1212 20:37:48.282932  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.282940  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:48.282945  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:48.283008  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:48.309285  404800 cri.go:89] found id: ""
	I1212 20:37:48.309299  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.309306  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:48.309311  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:48.309367  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:48.335545  404800 cri.go:89] found id: ""
	I1212 20:37:48.335559  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.335566  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:48.335573  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:48.335586  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:48.401770  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:48.401789  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:48.416320  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:48.416336  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:48.501926  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:48.486330   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.487051   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.492679   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.493283   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.495892   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:48.486330   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.487051   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.492679   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.493283   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.495892   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:48.501944  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:48.501955  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:48.576534  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:48.576555  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:51.105155  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:51.115964  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:51.116028  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:51.145401  404800 cri.go:89] found id: ""
	I1212 20:37:51.145416  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.145433  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:51.145445  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:51.145517  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:51.172664  404800 cri.go:89] found id: ""
	I1212 20:37:51.172679  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.172685  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:51.172690  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:51.172753  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:51.198093  404800 cri.go:89] found id: ""
	I1212 20:37:51.198108  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.198115  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:51.198120  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:51.198179  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:51.223420  404800 cri.go:89] found id: ""
	I1212 20:37:51.223433  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.223449  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:51.223454  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:51.223510  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:51.253134  404800 cri.go:89] found id: ""
	I1212 20:37:51.253157  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.253164  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:51.253170  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:51.253236  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:51.278738  404800 cri.go:89] found id: ""
	I1212 20:37:51.278753  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.278761  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:51.278766  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:51.278821  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:51.304296  404800 cri.go:89] found id: ""
	I1212 20:37:51.304311  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.304318  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:51.304325  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:51.304346  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:51.370289  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:51.370308  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:51.385101  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:51.385116  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:51.449107  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:51.441267   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.441910   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443391   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443793   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.445251   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:51.441267   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.441910   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443391   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443793   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.445251   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:51.449117  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:51.449127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:51.519024  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:51.519047  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:54.054216  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:54.064710  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:54.064769  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:54.091620  404800 cri.go:89] found id: ""
	I1212 20:37:54.091634  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.091641  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:54.091646  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:54.091701  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:54.122000  404800 cri.go:89] found id: ""
	I1212 20:37:54.122013  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.122020  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:54.122025  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:54.122081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:54.151439  404800 cri.go:89] found id: ""
	I1212 20:37:54.151454  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.151461  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:54.151466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:54.151520  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:54.180154  404800 cri.go:89] found id: ""
	I1212 20:37:54.180168  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.180175  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:54.180180  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:54.180235  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:54.206927  404800 cri.go:89] found id: ""
	I1212 20:37:54.206947  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.206954  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:54.206959  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:54.207014  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:54.231274  404800 cri.go:89] found id: ""
	I1212 20:37:54.231288  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.231306  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:54.231312  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:54.231366  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:54.259379  404800 cri.go:89] found id: ""
	I1212 20:37:54.259395  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.259402  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:54.259410  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:54.259420  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:54.325217  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:54.325237  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:54.339913  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:54.339930  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:54.403764  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:54.395245   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.396349   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.397140   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398216   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398891   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:54.395245   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.396349   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.397140   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398216   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398891   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:54.403774  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:54.403786  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:54.474019  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:54.474039  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:57.003568  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:57.016502  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:57.016560  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:57.042988  404800 cri.go:89] found id: ""
	I1212 20:37:57.043003  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.043010  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:57.043015  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:57.043072  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:57.071640  404800 cri.go:89] found id: ""
	I1212 20:37:57.071654  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.071661  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:57.071666  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:57.071737  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:57.098101  404800 cri.go:89] found id: ""
	I1212 20:37:57.098115  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.098123  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:57.098128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:57.098185  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:57.128276  404800 cri.go:89] found id: ""
	I1212 20:37:57.128300  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.128307  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:57.128312  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:57.128432  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:57.158908  404800 cri.go:89] found id: ""
	I1212 20:37:57.158922  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.158930  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:57.158939  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:57.159004  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:57.186146  404800 cri.go:89] found id: ""
	I1212 20:37:57.186161  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.186169  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:57.186174  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:57.186233  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:57.210969  404800 cri.go:89] found id: ""
	I1212 20:37:57.210984  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.210991  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:57.210999  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:57.211017  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:57.225391  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:57.225407  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:57.289597  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:57.280576   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.281422   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.283487   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.284167   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.285566   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:57.280576   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.281422   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.283487   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.284167   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.285566   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:57.289607  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:57.289617  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:57.362750  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:57.362771  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:57.396453  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:57.396470  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:59.967653  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:59.977921  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:59.977984  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:00.032267  404800 cri.go:89] found id: ""
	I1212 20:38:00.032297  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.032306  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:00.032312  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:00.032410  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:00.203733  404800 cri.go:89] found id: ""
	I1212 20:38:00.203752  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.203760  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:00.203766  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:00.203831  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:00.252579  404800 cri.go:89] found id: ""
	I1212 20:38:00.252596  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.252604  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:00.252610  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:00.252678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:00.301983  404800 cri.go:89] found id: ""
	I1212 20:38:00.302000  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.302009  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:00.302014  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:00.302081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:00.336785  404800 cri.go:89] found id: ""
	I1212 20:38:00.336813  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.336821  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:00.336827  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:00.336905  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:00.369703  404800 cri.go:89] found id: ""
	I1212 20:38:00.369720  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.369728  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:00.369749  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:00.369837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:00.404624  404800 cri.go:89] found id: ""
	I1212 20:38:00.404641  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.404649  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:00.404657  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:00.404669  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:00.473595  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:00.473616  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:00.493555  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:00.493572  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:00.568400  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:00.559640   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.560467   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562140   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562808   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.564591   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:00.559640   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.560467   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562140   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562808   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.564591   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:00.568411  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:00.568425  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:00.641391  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:00.641416  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:03.171500  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:03.182094  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:03.182153  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:03.207380  404800 cri.go:89] found id: ""
	I1212 20:38:03.207395  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.207402  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:03.207407  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:03.207465  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:03.232766  404800 cri.go:89] found id: ""
	I1212 20:38:03.232781  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.232788  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:03.232793  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:03.232856  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:03.263589  404800 cri.go:89] found id: ""
	I1212 20:38:03.263604  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.263611  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:03.263620  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:03.263678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:03.289719  404800 cri.go:89] found id: ""
	I1212 20:38:03.289734  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.289741  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:03.289755  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:03.289815  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:03.316755  404800 cri.go:89] found id: ""
	I1212 20:38:03.316770  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.316778  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:03.316783  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:03.316845  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:03.344424  404800 cri.go:89] found id: ""
	I1212 20:38:03.344438  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.344445  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:03.344451  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:03.344508  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:03.371242  404800 cri.go:89] found id: ""
	I1212 20:38:03.371257  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.371265  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:03.371273  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:03.371284  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:03.439155  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:03.439177  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:03.456896  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:03.456912  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:03.536136  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:03.527316   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.527920   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.529686   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.530397   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.532142   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:03.527316   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.527920   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.529686   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.530397   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.532142   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:03.536146  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:03.536159  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:03.610647  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:03.610666  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:06.146575  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:06.157383  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:06.157441  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:06.183306  404800 cri.go:89] found id: ""
	I1212 20:38:06.183321  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.183329  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:06.183334  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:06.183393  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:06.210325  404800 cri.go:89] found id: ""
	I1212 20:38:06.210340  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.210348  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:06.210353  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:06.210411  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:06.235611  404800 cri.go:89] found id: ""
	I1212 20:38:06.235625  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.235632  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:06.235638  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:06.235699  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:06.261846  404800 cri.go:89] found id: ""
	I1212 20:38:06.261860  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.261867  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:06.261872  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:06.261938  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:06.290103  404800 cri.go:89] found id: ""
	I1212 20:38:06.290116  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.290123  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:06.290128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:06.290185  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:06.316022  404800 cri.go:89] found id: ""
	I1212 20:38:06.316037  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.316044  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:06.316049  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:06.316107  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:06.342973  404800 cri.go:89] found id: ""
	I1212 20:38:06.342988  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.342996  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:06.343004  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:06.343015  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:06.413249  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:06.413270  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:06.428467  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:06.428492  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:06.521492  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:06.507208   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.508013   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.511867   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.515565   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.517219   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:06.507208   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.508013   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.511867   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.515565   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.517219   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:06.521503  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:06.521513  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:06.591077  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:06.591100  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:09.125976  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:09.136849  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:09.136908  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:09.163513  404800 cri.go:89] found id: ""
	I1212 20:38:09.163528  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.163535  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:09.163541  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:09.163603  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:09.194011  404800 cri.go:89] found id: ""
	I1212 20:38:09.194026  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.194033  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:09.194038  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:09.194098  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:09.223187  404800 cri.go:89] found id: ""
	I1212 20:38:09.223201  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.223214  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:09.223219  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:09.223278  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:09.253410  404800 cri.go:89] found id: ""
	I1212 20:38:09.253424  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.253431  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:09.253436  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:09.253509  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:09.278330  404800 cri.go:89] found id: ""
	I1212 20:38:09.278344  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.278351  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:09.278356  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:09.278416  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:09.307840  404800 cri.go:89] found id: ""
	I1212 20:38:09.307854  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.307861  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:09.307866  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:09.307924  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:09.335632  404800 cri.go:89] found id: ""
	I1212 20:38:09.335646  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.335653  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:09.335660  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:09.335671  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:09.406024  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:09.406045  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:09.434314  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:09.434331  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:09.515858  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:09.515880  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:09.532868  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:09.532885  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:09.599150  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:09.591061   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.591515   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593132   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593474   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.595021   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:09.591061   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.591515   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593132   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593474   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.595021   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:12.099436  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:12.110285  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:12.110345  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:12.135810  404800 cri.go:89] found id: ""
	I1212 20:38:12.135825  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.135832  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:12.135837  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:12.135897  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:12.160429  404800 cri.go:89] found id: ""
	I1212 20:38:12.160444  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.160451  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:12.160456  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:12.160511  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:12.187065  404800 cri.go:89] found id: ""
	I1212 20:38:12.187080  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.187087  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:12.187092  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:12.187154  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:12.212658  404800 cri.go:89] found id: ""
	I1212 20:38:12.212673  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.212681  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:12.212686  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:12.212743  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:12.238821  404800 cri.go:89] found id: ""
	I1212 20:38:12.238836  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.238843  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:12.238848  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:12.238909  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:12.265300  404800 cri.go:89] found id: ""
	I1212 20:38:12.265315  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.265322  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:12.265332  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:12.265392  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:12.292396  404800 cri.go:89] found id: ""
	I1212 20:38:12.292410  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.292418  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:12.292435  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:12.292445  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:12.358716  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:12.358736  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:12.374039  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:12.374056  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:12.438679  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:12.429880   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.430412   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432221   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432895   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.434800   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:12.429880   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.430412   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432221   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432895   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.434800   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:12.438690  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:12.438701  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:12.519199  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:12.519218  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:15.058664  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:15.078525  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:15.078590  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:15.105060  404800 cri.go:89] found id: ""
	I1212 20:38:15.105075  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.105082  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:15.105088  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:15.105153  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:15.133041  404800 cri.go:89] found id: ""
	I1212 20:38:15.133056  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.133063  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:15.133068  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:15.133133  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:15.160326  404800 cri.go:89] found id: ""
	I1212 20:38:15.160340  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.160347  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:15.160353  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:15.160435  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:15.187814  404800 cri.go:89] found id: ""
	I1212 20:38:15.187828  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.187835  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:15.187840  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:15.187900  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:15.227819  404800 cri.go:89] found id: ""
	I1212 20:38:15.227833  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.227839  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:15.227844  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:15.227901  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:15.255383  404800 cri.go:89] found id: ""
	I1212 20:38:15.255398  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.255404  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:15.255410  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:15.255468  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:15.280977  404800 cri.go:89] found id: ""
	I1212 20:38:15.280991  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.280997  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:15.281005  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:15.281022  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:15.347810  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:15.347832  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:15.362524  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:15.362541  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:15.427106  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:15.418336   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.419038   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.420787   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.421428   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.423218   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:15.418336   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.419038   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.420787   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.421428   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.423218   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:15.427116  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:15.427127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:15.497224  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:15.497244  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:18.029289  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:18.044111  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:18.044210  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:18.071723  404800 cri.go:89] found id: ""
	I1212 20:38:18.071737  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.071745  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:18.071750  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:18.071810  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:18.099105  404800 cri.go:89] found id: ""
	I1212 20:38:18.099119  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.099126  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:18.099131  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:18.099187  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:18.123656  404800 cri.go:89] found id: ""
	I1212 20:38:18.123670  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.123677  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:18.123682  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:18.123739  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:18.150020  404800 cri.go:89] found id: ""
	I1212 20:38:18.150033  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.150040  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:18.150045  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:18.150101  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:18.174527  404800 cri.go:89] found id: ""
	I1212 20:38:18.174541  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.174548  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:18.174552  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:18.174608  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:18.198686  404800 cri.go:89] found id: ""
	I1212 20:38:18.198701  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.198716  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:18.198722  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:18.198779  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:18.223482  404800 cri.go:89] found id: ""
	I1212 20:38:18.223496  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.223512  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:18.223521  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:18.223531  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:18.289154  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:18.289176  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:18.303954  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:18.303970  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:18.371467  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:18.362642   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.363507   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365091   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365692   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.367280   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:18.362642   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.363507   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365091   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365692   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.367280   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:18.371477  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:18.371493  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:18.440117  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:18.440138  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:20.983282  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:20.993766  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:20.993829  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:21.020992  404800 cri.go:89] found id: ""
	I1212 20:38:21.021006  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.021014  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:21.021019  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:21.021081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:21.047844  404800 cri.go:89] found id: ""
	I1212 20:38:21.047857  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.047865  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:21.047869  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:21.047930  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:21.073011  404800 cri.go:89] found id: ""
	I1212 20:38:21.073025  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.073033  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:21.073038  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:21.073095  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:21.098802  404800 cri.go:89] found id: ""
	I1212 20:38:21.098816  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.098823  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:21.098829  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:21.098884  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:21.127579  404800 cri.go:89] found id: ""
	I1212 20:38:21.127594  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.127601  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:21.127606  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:21.127672  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:21.154921  404800 cri.go:89] found id: ""
	I1212 20:38:21.154935  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.154942  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:21.154947  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:21.155001  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:21.181275  404800 cri.go:89] found id: ""
	I1212 20:38:21.181290  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.181297  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:21.181304  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:21.181316  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:21.197100  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:21.197118  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:21.263963  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:21.255290   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.255727   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.257359   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.258725   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.259518   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:21.255290   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.255727   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.257359   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.258725   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.259518   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:21.263974  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:21.263991  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:21.335974  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:21.335994  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:21.364201  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:21.364220  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:23.937090  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:23.947413  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:23.947474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:23.973243  404800 cri.go:89] found id: ""
	I1212 20:38:23.973258  404800 logs.go:282] 0 containers: []
	W1212 20:38:23.973265  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:23.973270  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:23.973324  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:23.999530  404800 cri.go:89] found id: ""
	I1212 20:38:23.999545  404800 logs.go:282] 0 containers: []
	W1212 20:38:23.999552  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:23.999557  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:23.999616  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:24.030165  404800 cri.go:89] found id: ""
	I1212 20:38:24.030180  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.030187  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:24.030193  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:24.030254  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:24.059776  404800 cri.go:89] found id: ""
	I1212 20:38:24.059792  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.059799  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:24.059804  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:24.059882  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:24.086292  404800 cri.go:89] found id: ""
	I1212 20:38:24.086306  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.086330  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:24.086338  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:24.086427  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:24.112150  404800 cri.go:89] found id: ""
	I1212 20:38:24.112164  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.112180  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:24.112185  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:24.112240  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:24.137517  404800 cri.go:89] found id: ""
	I1212 20:38:24.137532  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.137539  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:24.137547  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:24.137557  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:24.207037  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:24.207056  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:24.222129  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:24.222144  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:24.288581  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:24.279746   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.280696   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282388   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282920   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.284780   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:24.279746   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.280696   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282388   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282920   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.284780   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:24.288595  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:24.288605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:24.357884  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:24.357903  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:26.887217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:26.897518  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:26.897580  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:26.926965  404800 cri.go:89] found id: ""
	I1212 20:38:26.926980  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.926987  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:26.926992  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:26.927052  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:26.952974  404800 cri.go:89] found id: ""
	I1212 20:38:26.952988  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.952995  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:26.953000  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:26.953060  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:26.978786  404800 cri.go:89] found id: ""
	I1212 20:38:26.978801  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.978808  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:26.978813  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:26.978870  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:27.008564  404800 cri.go:89] found id: ""
	I1212 20:38:27.008580  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.008590  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:27.008595  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:27.008659  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:27.036286  404800 cri.go:89] found id: ""
	I1212 20:38:27.036301  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.036308  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:27.036313  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:27.036391  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:27.061515  404800 cri.go:89] found id: ""
	I1212 20:38:27.061529  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.061536  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:27.061541  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:27.061604  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:27.090603  404800 cri.go:89] found id: ""
	I1212 20:38:27.090617  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.090624  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:27.090632  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:27.090642  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:27.159097  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:27.150336   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.151193   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.152795   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.153435   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.155082   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:27.150336   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.151193   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.152795   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.153435   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.155082   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:27.159107  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:27.159118  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:27.228300  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:27.228321  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:27.258850  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:27.258867  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:27.328117  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:27.328139  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:29.843406  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:29.853466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:29.853526  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:29.878238  404800 cri.go:89] found id: ""
	I1212 20:38:29.878253  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.878260  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:29.878265  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:29.878323  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:29.907469  404800 cri.go:89] found id: ""
	I1212 20:38:29.907483  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.907490  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:29.907495  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:29.907550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:29.932873  404800 cri.go:89] found id: ""
	I1212 20:38:29.932887  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.932894  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:29.932900  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:29.932962  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:29.958139  404800 cri.go:89] found id: ""
	I1212 20:38:29.958153  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.958160  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:29.958165  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:29.958222  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:29.984390  404800 cri.go:89] found id: ""
	I1212 20:38:29.984405  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.984412  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:29.984416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:29.984474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:30.027335  404800 cri.go:89] found id: ""
	I1212 20:38:30.027351  404800 logs.go:282] 0 containers: []
	W1212 20:38:30.027360  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:30.027365  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:30.027440  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:30.094850  404800 cri.go:89] found id: ""
	I1212 20:38:30.094867  404800 logs.go:282] 0 containers: []
	W1212 20:38:30.094883  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:30.094911  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:30.094939  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:30.129199  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:30.129217  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:30.196813  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:30.196832  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:30.212809  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:30.212829  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:30.281108  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:30.272853   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.273567   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275146   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275609   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.277153   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:30.272853   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.273567   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275146   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275609   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.277153   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:30.281119  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:30.281130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
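The block above is one pass of minikube's apiserver wait loop: it checks for a kube-apiserver process, asks the CRI runtime (via crictl) whether any control-plane container exists in any state, and, finding none, falls back to collecting kubelet, dmesg, describe-nodes, CRI-O, and container-status logs before retrying. A minimal manual reproduction of the same probes on the node, using the command forms taken directly from the log above (the order of the log-gathering steps varies between passes):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # any apiserver process running?
	sudo crictl ps -a --quiet --name=kube-apiserver     # any apiserver container, in any state?
	sudo journalctl -u kubelet -n 400                   # recent kubelet logs
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	sudo journalctl -u crio -n 400                      # recent CRI-O logs
	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a   # overall container status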
	I1212 20:38:32.853025  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:32.863369  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:32.863434  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:32.890487  404800 cri.go:89] found id: ""
	I1212 20:38:32.890501  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.890508  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:32.890513  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:32.890570  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:32.915071  404800 cri.go:89] found id: ""
	I1212 20:38:32.915085  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.915093  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:32.915098  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:32.915155  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:32.940096  404800 cri.go:89] found id: ""
	I1212 20:38:32.940117  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.940131  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:32.940142  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:32.940234  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:32.965615  404800 cri.go:89] found id: ""
	I1212 20:38:32.965629  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.965644  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:32.965649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:32.965705  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:32.990438  404800 cri.go:89] found id: ""
	I1212 20:38:32.990452  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.990459  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:32.990466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:32.990527  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:33.018112  404800 cri.go:89] found id: ""
	I1212 20:38:33.018134  404800 logs.go:282] 0 containers: []
	W1212 20:38:33.018141  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:33.018146  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:33.018213  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:33.045014  404800 cri.go:89] found id: ""
	I1212 20:38:33.045029  404800 logs.go:282] 0 containers: []
	W1212 20:38:33.045036  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:33.045043  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:33.045054  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:33.116627  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:33.116649  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:33.131589  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:33.131605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:33.200143  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:33.191174   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.192118   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.193903   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.194394   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.196060   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:33.191174   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.192118   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.193903   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.194394   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.196060   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:33.200152  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:33.200165  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:33.270338  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:33.270359  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:35.806115  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:35.816131  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:35.816187  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:35.841646  404800 cri.go:89] found id: ""
	I1212 20:38:35.841660  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.841667  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:35.841672  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:35.841728  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:35.871233  404800 cri.go:89] found id: ""
	I1212 20:38:35.871247  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.871254  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:35.871259  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:35.871316  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:35.896270  404800 cri.go:89] found id: ""
	I1212 20:38:35.896285  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.896292  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:35.896297  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:35.896354  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:35.923679  404800 cri.go:89] found id: ""
	I1212 20:38:35.923693  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.923700  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:35.923705  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:35.923796  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:35.950841  404800 cri.go:89] found id: ""
	I1212 20:38:35.950856  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.950862  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:35.950867  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:35.950924  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:35.981198  404800 cri.go:89] found id: ""
	I1212 20:38:35.981212  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.981219  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:35.981224  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:35.981282  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:36.016848  404800 cri.go:89] found id: ""
	I1212 20:38:36.016865  404800 logs.go:282] 0 containers: []
	W1212 20:38:36.016872  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:36.016881  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:36.016892  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:36.085541  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:36.085562  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:36.100886  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:36.100904  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:36.169874  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:36.161259   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.162033   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.163626   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.164180   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.165318   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:36.161259   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.162033   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.163626   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.164180   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.165318   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:36.169886  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:36.169897  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:36.239866  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:36.239886  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
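Every describe-nodes attempt in this stretch fails the same way: the kubeconfig at /var/lib/minikube/kubeconfig points kubectl at https://localhost:8441, and with no kube-apiserver container running the TCP connect is refused, so the repeated "connection refused" lines are a symptom of the missing apiserver rather than a separate fault. A quick way to confirm the same condition from the node (an illustrative check, not part of the test output):

	curl -ksS https://localhost:8441/healthz   # while the apiserver is down this fails with 'connection refused'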
	I1212 20:38:38.770757  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:38.781375  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:38.781433  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:38.809421  404800 cri.go:89] found id: ""
	I1212 20:38:38.809436  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.809443  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:38.809448  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:38.809506  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:38.839566  404800 cri.go:89] found id: ""
	I1212 20:38:38.839579  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.839586  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:38.839591  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:38.839652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:38.865187  404800 cri.go:89] found id: ""
	I1212 20:38:38.865201  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.865208  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:38.865213  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:38.865272  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:38.890808  404800 cri.go:89] found id: ""
	I1212 20:38:38.890822  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.890829  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:38.890835  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:38.890891  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:38.917091  404800 cri.go:89] found id: ""
	I1212 20:38:38.917104  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.917117  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:38.917122  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:38.917179  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:38.942942  404800 cri.go:89] found id: ""
	I1212 20:38:38.942957  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.942964  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:38.942970  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:38.943030  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:38.973257  404800 cri.go:89] found id: ""
	I1212 20:38:38.973271  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.973278  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:38.973286  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:38.973296  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:39.043336  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:39.043356  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:39.072568  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:39.072588  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:39.140916  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:39.140937  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:39.157933  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:39.157949  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:39.223417  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:39.215410   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.216412   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.217404   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.218045   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.219600   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:39.215410   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.216412   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.217404   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.218045   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.219600   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
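The probe repeats on a roughly three-second cadence (20:38:29, :32, :35, :38, :41, ...), and each pass produces the same empty container listings and the same describe-nodes failure. To pull just the retry timestamps out of a saved copy of this log (minikube.log is a hypothetical filename used for illustration):

	grep 'pgrep -xnf kube-apiserver' minikube.log   # one line per retry; the timestamps show the ~3s interval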
	I1212 20:38:41.723637  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:41.734660  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:41.734716  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:41.767247  404800 cri.go:89] found id: ""
	I1212 20:38:41.767262  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.767269  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:41.767275  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:41.767328  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:41.796221  404800 cri.go:89] found id: ""
	I1212 20:38:41.796235  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.796248  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:41.796253  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:41.796312  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:41.821187  404800 cri.go:89] found id: ""
	I1212 20:38:41.821203  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.821216  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:41.821221  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:41.821284  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:41.847287  404800 cri.go:89] found id: ""
	I1212 20:38:41.847301  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.847308  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:41.847313  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:41.847372  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:41.872067  404800 cri.go:89] found id: ""
	I1212 20:38:41.872082  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.872089  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:41.872093  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:41.872152  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:41.897796  404800 cri.go:89] found id: ""
	I1212 20:38:41.897811  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.897818  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:41.897823  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:41.897881  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:41.923795  404800 cri.go:89] found id: ""
	I1212 20:38:41.923811  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.923818  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:41.923825  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:41.923836  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:41.990470  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:41.990491  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:42.009111  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:42.009130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:42.088409  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:42.077817   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.078495   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.081716   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.082488   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.083610   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:42.077817   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.078495   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.081716   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.082488   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.083610   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:42.088421  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:42.088433  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:42.192507  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:42.192534  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:44.727139  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:44.739542  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:44.739600  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:44.773501  404800 cri.go:89] found id: ""
	I1212 20:38:44.773515  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.773522  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:44.773527  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:44.773589  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:44.800128  404800 cri.go:89] found id: ""
	I1212 20:38:44.800142  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.800149  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:44.800154  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:44.800211  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:44.825549  404800 cri.go:89] found id: ""
	I1212 20:38:44.825563  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.825571  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:44.825576  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:44.825641  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:44.851616  404800 cri.go:89] found id: ""
	I1212 20:38:44.851630  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.851637  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:44.851642  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:44.851701  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:44.877278  404800 cri.go:89] found id: ""
	I1212 20:38:44.877293  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.877300  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:44.877305  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:44.877365  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:44.905623  404800 cri.go:89] found id: ""
	I1212 20:38:44.905637  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.905644  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:44.905649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:44.905705  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:44.931299  404800 cri.go:89] found id: ""
	I1212 20:38:44.931313  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.931319  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:44.931327  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:44.931338  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:44.998840  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:44.998865  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:45.080550  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:45.080572  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:45.173764  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:45.161784   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.162860   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.164308   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166462   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166938   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:45.161784   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.162860   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.164308   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166462   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166938   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:45.173775  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:45.173787  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:45.264449  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:45.264506  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:47.816513  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:47.826919  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:47.826978  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:47.856068  404800 cri.go:89] found id: ""
	I1212 20:38:47.856083  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.856090  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:47.856095  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:47.856154  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:47.883508  404800 cri.go:89] found id: ""
	I1212 20:38:47.883522  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.883529  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:47.883534  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:47.883595  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:47.909513  404800 cri.go:89] found id: ""
	I1212 20:38:47.909527  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.909534  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:47.909539  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:47.909617  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:47.939000  404800 cri.go:89] found id: ""
	I1212 20:38:47.939015  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.939022  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:47.939027  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:47.939084  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:47.965875  404800 cri.go:89] found id: ""
	I1212 20:38:47.965889  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.965897  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:47.965902  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:47.965975  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:47.992041  404800 cri.go:89] found id: ""
	I1212 20:38:47.992056  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.992063  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:47.992068  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:47.992127  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:48.022837  404800 cri.go:89] found id: ""
	I1212 20:38:48.022852  404800 logs.go:282] 0 containers: []
	W1212 20:38:48.022860  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:48.022867  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:48.022880  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:48.039393  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:48.039410  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:48.107317  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:48.098264   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.099224   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.100841   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.101682   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.102665   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:48.098264   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.099224   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.100841   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.101682   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.102665   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:48.107328  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:48.107340  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:48.175841  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:48.175861  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:48.210572  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:48.210594  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:50.783090  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:50.796736  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:50.796840  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:50.825233  404800 cri.go:89] found id: ""
	I1212 20:38:50.825248  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.825255  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:50.825261  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:50.825319  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:50.852180  404800 cri.go:89] found id: ""
	I1212 20:38:50.852194  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.852201  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:50.852206  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:50.852262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:50.878747  404800 cri.go:89] found id: ""
	I1212 20:38:50.878763  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.878770  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:50.878775  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:50.878835  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:50.904522  404800 cri.go:89] found id: ""
	I1212 20:38:50.904536  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.904543  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:50.904548  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:50.904604  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:50.931344  404800 cri.go:89] found id: ""
	I1212 20:38:50.931360  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.931367  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:50.931372  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:50.931428  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:50.957483  404800 cri.go:89] found id: ""
	I1212 20:38:50.957498  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.957505  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:50.957510  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:50.957568  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:50.982756  404800 cri.go:89] found id: ""
	I1212 20:38:50.982771  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.982778  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:50.982785  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:50.982796  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:51.050968  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:51.050990  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:51.066537  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:51.066556  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:51.139075  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:51.129544   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.130952   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.132306   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.133118   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.134432   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:51.129544   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.130952   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.132306   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.133118   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.134432   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:51.139089  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:51.139101  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:51.210713  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:51.210734  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:53.744531  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:53.755115  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:53.755176  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:53.782428  404800 cri.go:89] found id: ""
	I1212 20:38:53.782443  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.782450  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:53.782455  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:53.782513  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:53.809102  404800 cri.go:89] found id: ""
	I1212 20:38:53.809116  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.809123  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:53.809128  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:53.809188  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:53.836479  404800 cri.go:89] found id: ""
	I1212 20:38:53.836492  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.836500  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:53.836505  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:53.836567  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:53.862110  404800 cri.go:89] found id: ""
	I1212 20:38:53.862124  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.862131  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:53.862136  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:53.862193  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:53.888092  404800 cri.go:89] found id: ""
	I1212 20:38:53.888112  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.888119  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:53.888124  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:53.888188  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:53.918381  404800 cri.go:89] found id: ""
	I1212 20:38:53.918412  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.918419  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:53.918425  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:53.918482  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:53.944685  404800 cri.go:89] found id: ""
	I1212 20:38:53.944700  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.944707  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:53.944715  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:53.944726  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:53.976361  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:53.976398  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:54.043617  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:54.043638  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:54.059716  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:54.059735  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:54.127525  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:54.119445   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.119949   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121471   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121928   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.123395   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:54.119445   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.119949   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121471   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121928   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.123395   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:54.127535  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:54.127550  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:56.697671  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:56.712906  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:56.712987  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:56.745699  404800 cri.go:89] found id: ""
	I1212 20:38:56.745713  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.745721  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:56.745726  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:56.745780  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:56.774995  404800 cri.go:89] found id: ""
	I1212 20:38:56.775008  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.775015  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:56.775022  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:56.775076  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:56.801088  404800 cri.go:89] found id: ""
	I1212 20:38:56.801102  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.801109  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:56.801115  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:56.801171  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:56.825939  404800 cri.go:89] found id: ""
	I1212 20:38:56.825953  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.825960  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:56.825965  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:56.826020  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:56.851013  404800 cri.go:89] found id: ""
	I1212 20:38:56.851028  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.851035  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:56.851040  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:56.851099  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:56.875791  404800 cri.go:89] found id: ""
	I1212 20:38:56.875815  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.875823  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:56.875829  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:56.875894  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:56.902106  404800 cri.go:89] found id: ""
	I1212 20:38:56.902121  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.902128  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:56.902136  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:56.902146  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:56.933095  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:56.933112  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:56.999748  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:56.999770  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:57.023866  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:57.023882  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:57.095113  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:57.086986   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.087518   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089030   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089355   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.090800   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:57.086986   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.087518   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089030   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089355   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.090800   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:57.095123  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:57.095133  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
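Each retry cycle above probes the CRI for every control-plane component by name (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet); `found id: ""` and `0 containers` mean not even an exited container matches, since `crictl ps -a` includes stopped containers. The same probe, collapsed into a single illustrative command (a sketch, not part of the test run):

    # roughly the per-component loop from the log in one grep
    minikube ssh -- "sudo crictl ps -a | grep -E 'kube-apiserver|etcd|coredns|kube-scheduler|kube-proxy|kube-controller-manager|kindnet' || echo 'no control-plane containers at all'"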
	I1212 20:38:59.665770  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:59.675717  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:59.675792  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:59.701606  404800 cri.go:89] found id: ""
	I1212 20:38:59.701620  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.701626  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:59.701631  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:59.701688  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:59.736582  404800 cri.go:89] found id: ""
	I1212 20:38:59.736597  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.736603  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:59.736609  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:59.736666  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:59.764566  404800 cri.go:89] found id: ""
	I1212 20:38:59.764588  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.764595  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:59.764602  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:59.764664  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:59.793759  404800 cri.go:89] found id: ""
	I1212 20:38:59.793774  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.793781  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:59.793786  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:59.793858  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:59.821810  404800 cri.go:89] found id: ""
	I1212 20:38:59.821824  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.821841  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:59.821846  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:59.821903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:59.851583  404800 cri.go:89] found id: ""
	I1212 20:38:59.851606  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.851614  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:59.851619  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:59.851688  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:59.878726  404800 cri.go:89] found id: ""
	I1212 20:38:59.878740  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.878746  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:59.878754  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:59.878764  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:59.943708  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:59.943728  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:59.958686  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:59.958704  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:00.056135  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:00.034453   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.036639   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.037425   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.039837   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.045102   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:00.034453   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.036639   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.037425   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.039837   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.045102   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:00.056146  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:00.056159  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:00.155066  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:00.155091  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:02.718200  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:02.729492  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:02.729550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:02.760544  404800 cri.go:89] found id: ""
	I1212 20:39:02.760559  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.760566  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:02.760571  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:02.760635  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:02.792146  404800 cri.go:89] found id: ""
	I1212 20:39:02.792161  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.792174  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:02.792180  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:02.792239  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:02.818586  404800 cri.go:89] found id: ""
	I1212 20:39:02.818601  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.818609  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:02.818614  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:02.818678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:02.844172  404800 cri.go:89] found id: ""
	I1212 20:39:02.844187  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.844194  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:02.844199  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:02.844256  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:02.871047  404800 cri.go:89] found id: ""
	I1212 20:39:02.871061  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.871069  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:02.871074  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:02.871132  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:02.898048  404800 cri.go:89] found id: ""
	I1212 20:39:02.898062  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.898070  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:02.898075  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:02.898131  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:02.923194  404800 cri.go:89] found id: ""
	I1212 20:39:02.923209  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.923216  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:02.923224  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:02.923234  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:02.988912  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:02.988932  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:03.004362  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:03.004410  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:03.075259  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:03.067064   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.067768   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069384   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069725   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.071272   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:03.067064   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.067768   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069384   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069725   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.071272   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:03.075269  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:03.075280  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:03.148856  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:03.148876  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:05.677035  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:05.686903  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:05.686961  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:05.722182  404800 cri.go:89] found id: ""
	I1212 20:39:05.722197  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.722204  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:05.722211  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:05.722309  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:05.756818  404800 cri.go:89] found id: ""
	I1212 20:39:05.756832  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.756839  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:05.756844  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:05.756946  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:05.785780  404800 cri.go:89] found id: ""
	I1212 20:39:05.785794  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.785801  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:05.785806  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:05.785862  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:05.816052  404800 cri.go:89] found id: ""
	I1212 20:39:05.816066  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.816073  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:05.816078  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:05.816134  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:05.841695  404800 cri.go:89] found id: ""
	I1212 20:39:05.841709  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.841716  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:05.841721  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:05.841782  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:05.868902  404800 cri.go:89] found id: ""
	I1212 20:39:05.868917  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.868924  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:05.868929  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:05.868998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:05.898574  404800 cri.go:89] found id: ""
	I1212 20:39:05.898589  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.898596  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:05.898603  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:05.898617  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:05.966027  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:05.966048  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:05.980827  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:05.980843  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:06.048518  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:06.039273   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.039766   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041577   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041956   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.043588   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:06.039273   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.039766   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041577   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041956   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.043588   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:06.048528  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:06.048539  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:06.118539  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:06.118566  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
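Because the apiserver is unreachable, each cycle falls back to collecting the kubelet journal, dmesg (warn level and above), the failing "describe nodes" output, the CRI-O journal, and container status via crictl (or docker). Roughly the same diagnostics bundle can be pulled in one step when reproducing this by hand (hedged sketch; assumes the affected profile is the currently selected one):

    # write the full minikube diagnostics bundle to a file
    minikube logs --file=./minikube-logs.txt
    # or fetch just the kubelet unit from the node, as the loop does
    minikube ssh -- "sudo journalctl -u kubelet -n 400 --no-pager"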
	I1212 20:39:08.648618  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:08.659086  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:08.659147  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:08.684568  404800 cri.go:89] found id: ""
	I1212 20:39:08.684583  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.684590  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:08.684595  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:08.684655  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:08.714848  404800 cri.go:89] found id: ""
	I1212 20:39:08.714862  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.714869  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:08.714873  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:08.714942  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:08.749610  404800 cri.go:89] found id: ""
	I1212 20:39:08.749636  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.749643  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:08.749654  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:08.749720  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:08.780856  404800 cri.go:89] found id: ""
	I1212 20:39:08.780871  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.780878  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:08.780883  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:08.780943  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:08.805202  404800 cri.go:89] found id: ""
	I1212 20:39:08.805216  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.805223  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:08.805228  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:08.805287  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:08.830301  404800 cri.go:89] found id: ""
	I1212 20:39:08.830317  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.830324  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:08.830329  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:08.830389  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:08.857083  404800 cri.go:89] found id: ""
	I1212 20:39:08.857098  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.857105  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:08.857113  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:08.857124  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:08.925442  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:08.925464  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:08.940523  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:08.940539  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:09.013233  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:08.997498   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.998019   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.999823   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.000173   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.008193   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:08.997498   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.998019   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.999823   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.000173   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.008193   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:09.013243  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:09.013254  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:09.085178  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:09.085198  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:11.613987  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:11.624006  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:11.624073  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:11.648868  404800 cri.go:89] found id: ""
	I1212 20:39:11.648883  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.648890  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:11.648902  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:11.648959  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:11.673750  404800 cri.go:89] found id: ""
	I1212 20:39:11.673764  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.673771  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:11.673776  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:11.673837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:11.701310  404800 cri.go:89] found id: ""
	I1212 20:39:11.701324  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.701340  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:11.701347  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:11.701407  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:11.728807  404800 cri.go:89] found id: ""
	I1212 20:39:11.728821  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.728828  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:11.728833  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:11.728898  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:11.762671  404800 cri.go:89] found id: ""
	I1212 20:39:11.762706  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.762715  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:11.762720  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:11.762786  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:11.788450  404800 cri.go:89] found id: ""
	I1212 20:39:11.788481  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.788488  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:11.788493  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:11.788559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:11.816693  404800 cri.go:89] found id: ""
	I1212 20:39:11.816707  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.816714  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:11.816722  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:11.816732  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:11.886583  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:11.878248   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.878964   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.880707   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.881208   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.882676   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:11.878248   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.878964   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.880707   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.881208   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.882676   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:11.886593  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:11.886604  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:11.955026  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:11.955046  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:11.984471  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:11.984489  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:12.054196  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:12.054217  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:14.569266  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:14.579178  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:14.579234  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:14.603297  404800 cri.go:89] found id: ""
	I1212 20:39:14.603312  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.603319  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:14.603324  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:14.603381  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:14.628304  404800 cri.go:89] found id: ""
	I1212 20:39:14.628318  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.628325  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:14.628330  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:14.628404  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:14.653112  404800 cri.go:89] found id: ""
	I1212 20:39:14.653126  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.653133  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:14.653138  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:14.653201  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:14.678048  404800 cri.go:89] found id: ""
	I1212 20:39:14.678063  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.678078  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:14.678083  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:14.678141  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:14.710561  404800 cri.go:89] found id: ""
	I1212 20:39:14.710584  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.710592  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:14.710597  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:14.710662  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:14.744837  404800 cri.go:89] found id: ""
	I1212 20:39:14.744862  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.744870  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:14.744876  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:14.744943  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:14.777906  404800 cri.go:89] found id: ""
	I1212 20:39:14.777920  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.777927  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:14.777936  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:14.777946  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:14.844303  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:14.844323  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:14.859158  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:14.859179  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:14.922392  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:14.913424   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.913976   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.915631   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.916316   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.918007   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:14.913424   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.913976   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.915631   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.916316   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.918007   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:14.922427  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:14.922438  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:14.992900  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:14.992920  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:17.545196  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:17.555712  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:17.555785  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:17.582444  404800 cri.go:89] found id: ""
	I1212 20:39:17.582458  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.582465  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:17.582470  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:17.582527  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:17.606892  404800 cri.go:89] found id: ""
	I1212 20:39:17.606906  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.606926  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:17.606932  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:17.606998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:17.631824  404800 cri.go:89] found id: ""
	I1212 20:39:17.631840  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.631846  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:17.631851  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:17.631906  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:17.658525  404800 cri.go:89] found id: ""
	I1212 20:39:17.658540  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.658548  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:17.658553  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:17.658610  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:17.687764  404800 cri.go:89] found id: ""
	I1212 20:39:17.687777  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.687784  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:17.687789  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:17.687844  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:17.720465  404800 cri.go:89] found id: ""
	I1212 20:39:17.720480  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.720488  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:17.720493  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:17.720561  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:17.758231  404800 cri.go:89] found id: ""
	I1212 20:39:17.758245  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.758261  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:17.758270  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:17.758281  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:17.838248  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:17.838280  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:17.852734  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:17.852752  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:17.918178  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:17.909812   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.910592   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912169   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912772   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.914355   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:17.909812   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.910592   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912169   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912772   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.914355   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:17.918190  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:17.918202  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:17.985880  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:17.985901  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
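The loop keeps polling roughly every three seconds without the picture changing: no control-plane containers are found and port 8441 keeps refusing connections. When triaging a run like this, the kubelet journal is the most likely place to show why the static control-plane pods were never created; an illustrative filter (hypothetical, keyword choice is a guess) could be:

    # surface the kubelet's complaints about the static control-plane pods
    minikube ssh -- "sudo journalctl -u kubelet --no-pager | grep -iE 'apiserver|static pod|failed' | tail -n 50"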
	I1212 20:39:20.529812  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:20.539894  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:20.539954  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:20.564821  404800 cri.go:89] found id: ""
	I1212 20:39:20.564834  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.564841  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:20.564846  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:20.564903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:20.594524  404800 cri.go:89] found id: ""
	I1212 20:39:20.594538  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.594544  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:20.594549  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:20.594606  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:20.619997  404800 cri.go:89] found id: ""
	I1212 20:39:20.620011  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.620018  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:20.620023  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:20.620079  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:20.644542  404800 cri.go:89] found id: ""
	I1212 20:39:20.644557  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.644564  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:20.644569  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:20.644624  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:20.670273  404800 cri.go:89] found id: ""
	I1212 20:39:20.670289  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.670296  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:20.670302  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:20.670358  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:20.694691  404800 cri.go:89] found id: ""
	I1212 20:39:20.694705  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.694712  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:20.694717  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:20.694771  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:20.739770  404800 cri.go:89] found id: ""
	I1212 20:39:20.739784  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.739791  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:20.739798  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:20.739809  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:20.810407  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:20.810429  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:20.825194  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:20.825210  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:20.899009  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:20.889886   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.890662   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.892566   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.893441   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.894986   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:20.889886   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.890662   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.892566   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.893441   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.894986   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:20.899020  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:20.899032  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:20.977107  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:20.977129  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:23.510601  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:23.521033  404800 kubeadm.go:602] duration metric: took 4m3.32729864s to restartPrimaryControlPlane
	W1212 20:39:23.521093  404800 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 20:39:23.521166  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 20:39:23.936973  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:39:23.949604  404800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:39:23.957638  404800 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:39:23.957691  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:39:23.965470  404800 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:39:23.965481  404800 kubeadm.go:158] found existing configuration files:
	
	I1212 20:39:23.965536  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:39:23.973241  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:39:23.973300  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:39:23.980875  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:39:23.989722  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:39:23.989777  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:39:23.997778  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:39:24.007027  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:39:24.007112  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:39:24.016721  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:39:24.025622  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:39:24.025690  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:39:24.034033  404800 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:39:24.077877  404800 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:39:24.079077  404800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:39:24.152874  404800 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:39:24.152937  404800 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:39:24.152972  404800 kubeadm.go:319] OS: Linux
	I1212 20:39:24.153034  404800 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:39:24.153081  404800 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:39:24.153126  404800 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:39:24.153178  404800 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:39:24.153225  404800 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:39:24.153271  404800 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:39:24.153314  404800 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:39:24.153363  404800 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:39:24.153407  404800 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:39:24.219483  404800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:39:24.219589  404800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:39:24.219678  404800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:39:24.228954  404800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:39:24.234481  404800 out.go:252]   - Generating certificates and keys ...
	I1212 20:39:24.234574  404800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:39:24.234638  404800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:39:24.234713  404800 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:39:24.234772  404800 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:39:24.234841  404800 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:39:24.234896  404800 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:39:24.234958  404800 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:39:24.235017  404800 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:39:24.235090  404800 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:39:24.235172  404800 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:39:24.235208  404800 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:39:24.235263  404800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:39:24.294876  404800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:39:24.534877  404800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:39:24.632916  404800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:39:24.763704  404800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:39:25.183116  404800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:39:25.183864  404800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:39:25.186637  404800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:39:25.190125  404800 out.go:252]   - Booting up control plane ...
	I1212 20:39:25.190229  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:39:25.190325  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:39:25.190412  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:39:25.205322  404800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:39:25.205427  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:39:25.215814  404800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:39:25.216163  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:39:25.216236  404800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:39:25.353073  404800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:39:25.353188  404800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:43:25.353162  404800 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000280513s
	I1212 20:43:25.353205  404800 kubeadm.go:319] 
	I1212 20:43:25.353282  404800 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:43:25.353332  404800 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:43:25.353453  404800 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:43:25.353461  404800 kubeadm.go:319] 
	I1212 20:43:25.353609  404800 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:43:25.353657  404800 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:43:25.353688  404800 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:43:25.353691  404800 kubeadm.go:319] 
	I1212 20:43:25.359119  404800 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:43:25.359579  404800 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:43:25.359715  404800 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:43:25.360004  404800 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:43:25.360010  404800 kubeadm.go:319] 
	I1212 20:43:25.360149  404800 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 20:43:25.360245  404800 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000280513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 20:43:25.360353  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 20:43:25.770646  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:43:25.783563  404800 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:43:25.783624  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:43:25.791806  404800 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:43:25.791814  404800 kubeadm.go:158] found existing configuration files:
	
	I1212 20:43:25.791862  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:43:25.799745  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:43:25.799799  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:43:25.807302  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:43:25.815035  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:43:25.815084  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:43:25.822960  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:43:25.831068  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:43:25.831122  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:43:25.838463  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:43:25.846379  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:43:25.846433  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:43:25.853821  404800 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:43:25.894714  404800 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:43:25.895009  404800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:43:25.961164  404800 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:43:25.961230  404800 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:43:25.961265  404800 kubeadm.go:319] OS: Linux
	I1212 20:43:25.961309  404800 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:43:25.961355  404800 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:43:25.961404  404800 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:43:25.961451  404800 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:43:25.961498  404800 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:43:25.961544  404800 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:43:25.961587  404800 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:43:25.961634  404800 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:43:25.961678  404800 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:43:26.029509  404800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:43:26.029612  404800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:43:26.029701  404800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:43:26.038278  404800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:43:26.041933  404800 out.go:252]   - Generating certificates and keys ...
	I1212 20:43:26.042043  404800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:43:26.042118  404800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:43:26.042200  404800 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:43:26.042265  404800 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:43:26.042338  404800 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:43:26.042395  404800 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:43:26.042462  404800 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:43:26.042527  404800 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:43:26.042606  404800 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:43:26.042683  404800 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:43:26.042722  404800 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:43:26.042781  404800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:43:26.129341  404800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:43:26.328670  404800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:43:26.553215  404800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:43:26.647700  404800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:43:26.895572  404800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:43:26.896106  404800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:43:26.898924  404800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:43:26.902076  404800 out.go:252]   - Booting up control plane ...
	I1212 20:43:26.902180  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:43:26.902266  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:43:26.902331  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:43:26.916276  404800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:43:26.916395  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:43:26.923968  404800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:43:26.925348  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:43:26.925393  404800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:43:27.058187  404800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:43:27.058300  404800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:47:27.059387  404800 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001189054s
	I1212 20:47:27.059415  404800 kubeadm.go:319] 
	I1212 20:47:27.059512  404800 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:47:27.059567  404800 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:47:27.059889  404800 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:47:27.059895  404800 kubeadm.go:319] 
	I1212 20:47:27.060100  404800 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:47:27.060426  404800 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:47:27.060479  404800 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:47:27.060483  404800 kubeadm.go:319] 
	I1212 20:47:27.064619  404800 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:47:27.065062  404800 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:47:27.065168  404800 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:47:27.065401  404800 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:47:27.065405  404800 kubeadm.go:319] 
	I1212 20:47:27.065522  404800 kubeadm.go:403] duration metric: took 12m6.90957682s to StartCluster
	I1212 20:47:27.065550  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:47:27.065606  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:47:27.065669  404800 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:47:27.091473  404800 cri.go:89] found id: ""
	I1212 20:47:27.091488  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.091495  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:47:27.091500  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:47:27.091559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:47:27.118055  404800 cri.go:89] found id: ""
	I1212 20:47:27.118069  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.118076  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:47:27.118081  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:47:27.118141  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:47:27.144553  404800 cri.go:89] found id: ""
	I1212 20:47:27.144567  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.144574  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:47:27.144579  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:47:27.144636  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:47:27.170138  404800 cri.go:89] found id: ""
	I1212 20:47:27.170152  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.170172  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:47:27.170177  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:47:27.170242  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:47:27.199222  404800 cri.go:89] found id: ""
	I1212 20:47:27.199236  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.199243  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:47:27.199248  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:47:27.199305  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:47:27.225906  404800 cri.go:89] found id: ""
	I1212 20:47:27.225921  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.225929  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:47:27.225934  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:47:27.225993  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:47:27.251774  404800 cri.go:89] found id: ""
	I1212 20:47:27.251788  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.251795  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:47:27.251803  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:47:27.251843  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:47:27.318965  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:47:27.318984  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:47:27.336153  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:47:27.336169  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:47:27.403235  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:47:27.394974   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.395673   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397398   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397865   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.399347   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:47:27.394974   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.395673   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397398   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397865   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.399347   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:47:27.403245  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:47:27.403256  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:47:27.475348  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:47:27.475369  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 20:47:27.504551  404800 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:47:27.504592  404800 out.go:285] * 
	W1212 20:47:27.504699  404800 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:47:27.504759  404800 out.go:285] * 
	W1212 20:47:27.507341  404800 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:47:27.514164  404800 out.go:203] 
	W1212 20:47:27.517009  404800 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:47:27.517056  404800 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:47:27.517078  404800 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:47:27.520151  404800 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617557022Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617594914Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617644933Z" level=info msg="Create NRI interface"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617744979Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617956402Z" level=info msg="runtime interface created"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617981551Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617990003Z" level=info msg="runtime interface starting up..."
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618002294Z" level=info msg="starting plugins..."
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618017146Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618092166Z" level=info msg="No systemd watchdog enabled"
	Dec 12 20:35:18 functional-261311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.223066755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=efc21d87-a1b0-4de5-a48b-a3e014a5db32 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.223827337Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e9bb6f76-9bf0-445e-a911-5989a7f224b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.224384709Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=eb32b7e0-d164-45f4-be96-6799b271663a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.224808771Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=192a05d5-754c-4620-9a7e-630a23b2f5d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.225240365Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d03d55da-4587-4eea-8a9a-e52381826a03 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.225676677Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c7d002dd-9552-4715-b7be-2078da811840 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.226165084Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=daf96e40-8252-45d3-a005-ea53669f5cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.033616408Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2a067e1-2c90-429c-b592-c0026a728c8d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.0344028Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=faa7c5c2-57de-45d0-98b9-b1fc40b3897e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.034956632Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=86aed69e-89fa-4789-b7e0-66c21b53b655 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.035606867Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e8b67148-2c8a-4d5b-8bc5-9c052262c589 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036159986Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=56696416-3d0f-4c77-8dbb-77790563b13a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036707976Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ba267d45-19b4-448f-9f92-2993fe38692a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.037209312Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=301781f2-4844-424c-a8ec-9528bb0007ad name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:47:31.051236   21346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:31.051857   21346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:31.053469   21346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:31.054055   21346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:31.055783   21346 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:47:31 up  3:30,  0 user,  load average: 0.10, 0.16, 0.52
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:47:28 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:47:29 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 964.
	Dec 12 20:47:29 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:29 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:29 functional-261311 kubelet[21222]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:29 functional-261311 kubelet[21222]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:29 functional-261311 kubelet[21222]: E1212 20:47:29.282526   21222 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:47:29 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:47:29 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:47:29 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 965.
	Dec 12 20:47:29 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:29 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:30 functional-261311 kubelet[21248]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:30 functional-261311 kubelet[21248]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:30 functional-261311 kubelet[21248]: E1212 20:47:30.031617   21248 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:47:30 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:47:30 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:47:30 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 966.
	Dec 12 20:47:30 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:30 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:47:30 functional-261311 kubelet[21271]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:30 functional-261311 kubelet[21271]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:47:30 functional-261311 kubelet[21271]: E1212 20:47:30.774997   21271 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:47:30 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:47:30 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (348.626019ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (2.32s)
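Note on this failure group: the kubelet journal above shows the underlying cause. The kubelet exits on every restart with "kubelet is configured to not run on a host using cgroup v1", so the control-plane static pods (including kube-apiserver on localhost:8441) never come up and every kubectl call is refused. The minikube output at 20:47:27 suggests checking 'journalctl -xeu kubelet' and retrying with --extra-config=kubelet.cgroup-driver=systemd, and the kubeadm preflight warning points at setting the kubelet configuration option 'FailCgroupV1' to 'false' on cgroup v1 hosts. A minimal triage sketch along those lines (the cgroup check is an assumption about the node container, not something shown in the test output; the other commands come from the suggestions above and are not a verified fix):

    # confirm which cgroup hierarchy the node container is on (cgroup2fs => v2, tmpfs => v1)
    docker exec functional-261311 stat -fc %T /sys/fs/cgroup
    # inspect why the kubelet keeps restarting, as suggested by kubeadm
    docker exec functional-261311 systemctl status kubelet
    docker exec functional-261311 journalctl -xeu kubelet --no-pager | tail -n 50
    # retry the start with the override suggested by minikube at 20:47:27
    out/minikube-linux-arm64 start -p functional-261311 --extra-config=kubelet.cgroup-driver=systemd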

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-261311 apply -f testdata/invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-261311 apply -f testdata/invalidsvc.yaml: exit status 1 (57.283892ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/invalidsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-261311 apply -f testdata/invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (0.06s)
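Note: this apply fails client-side before anything reaches the cluster. Validation needs the OpenAPI schema from https://192.168.49.2:8441, which is the same stopped apiserver as in the ComponentHealth failure above. The error text itself mentions --validate=false, but skipping validation would only move the failure to the actual request while the apiserver is down. A quick reachability check, as a sketch (the /healthz probe is an illustrative endpoint choice, not part of the test output):

    # probe the apiserver port the test is trying to use
    curl -k --max-time 5 https://192.168.49.2:8441/healthz || echo "apiserver unreachable"
    # the bypass mentioned in the error above; it still fails while the apiserver is down
    kubectl --context functional-261311 apply --validate=false -f testdata/invalidsvc.yaml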

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-261311 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-261311 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-261311 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-261311 --alsologtostderr -v=1] stderr:
I1212 20:49:35.764803  422131 out.go:360] Setting OutFile to fd 1 ...
I1212 20:49:35.764988  422131 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:35.765000  422131 out.go:374] Setting ErrFile to fd 2...
I1212 20:49:35.765005  422131 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:35.765261  422131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:49:35.765521  422131 mustload.go:66] Loading cluster: functional-261311
I1212 20:49:35.765963  422131 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:35.766486  422131 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
I1212 20:49:35.783085  422131 host.go:66] Checking if "functional-261311" exists ...
I1212 20:49:35.783400  422131 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 20:49:35.835878  422131 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:49:35.82661643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:
/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 20:49:35.836014  422131 api_server.go:166] Checking apiserver status ...
I1212 20:49:35.836087  422131 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 20:49:35.836136  422131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
I1212 20:49:35.853890  422131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
W1212 20:49:35.962968  422131 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1212 20:49:35.966178  422131 out.go:179] * The control-plane node functional-261311 apiserver is not running: (state=Stopped)
I1212 20:49:35.969084  422131 out.go:179]   To start a cluster, run: "minikube start -p functional-261311"
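Note: the dashboard command produces no URL because its apiserver preflight check fails: the pgrep probe in the stderr above finds no kube-apiserver process, so the command reports state=Stopped and exits before any proxy URL can be printed. The same probe can be run by hand, reusing the command already shown in the log (quoting added for the shell):

    docker exec functional-261311 sudo pgrep -xnf 'kube-apiserver.*minikube.*'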
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (330.713754ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
E1212 20:49:36.832462  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service   │ functional-261311 service hello-node --url                                                                                                         │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount     │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001:/mount-9p --alsologtostderr -v=1             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh       │ functional-261311 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh       │ functional-261311 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh -- ls -la /mount-9p                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh cat /mount-9p/test-1765572565314390121                                                                                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh       │ functional-261311 ssh sudo umount -f /mount-9p                                                                                                     │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount     │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo996352420/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh       │ functional-261311 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh -- ls -la /mount-9p                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh sudo umount -f /mount-9p                                                                                                     │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount     │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount1 --alsologtostderr -v=1               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh       │ functional-261311 ssh findmnt -T /mount1                                                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount     │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount2 --alsologtostderr -v=1               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount     │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount3 --alsologtostderr -v=1               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh       │ functional-261311 ssh findmnt -T /mount1                                                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh findmnt -T /mount2                                                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh findmnt -T /mount3                                                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ mount     │ -p functional-261311 --kill=true                                                                                                                   │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ start     │ -p functional-261311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0      │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ start     │ -p functional-261311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0      │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ start     │ -p functional-261311 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-261311 --alsologtostderr -v=1                                                                                     │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	└───────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:49:35
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:49:35.533502  422059 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:49:35.533654  422059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:49:35.533680  422059 out.go:374] Setting ErrFile to fd 2...
	I1212 20:49:35.533686  422059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:49:35.533997  422059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:49:35.534386  422059 out.go:368] Setting JSON to false
	I1212 20:49:35.535259  422059 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12728,"bootTime":1765559848,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:49:35.535328  422059 start.go:143] virtualization:  
	I1212 20:49:35.538650  422059 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:49:35.541685  422059 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:49:35.541766  422059 notify.go:221] Checking for updates...
	I1212 20:49:35.547510  422059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:49:35.550302  422059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:49:35.553198  422059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:49:35.556172  422059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:49:35.559136  422059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:49:35.562577  422059 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:49:35.563232  422059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:49:35.589863  422059 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:49:35.589981  422059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:49:35.646483  422059 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:49:35.637420895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:49:35.646591  422059 docker.go:319] overlay module found
	I1212 20:49:35.649676  422059 out.go:179] * Using the docker driver based on existing profile
	I1212 20:49:35.652473  422059 start.go:309] selected driver: docker
	I1212 20:49:35.652493  422059 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:49:35.652603  422059 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:49:35.652719  422059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:49:35.709556  422059 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:49:35.699409249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:49:35.710002  422059 cni.go:84] Creating CNI manager for ""
	I1212 20:49:35.710068  422059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:49:35.710110  422059 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:49:35.713406  422059 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617557022Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617594914Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617644933Z" level=info msg="Create NRI interface"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617744979Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617956402Z" level=info msg="runtime interface created"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617981551Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617990003Z" level=info msg="runtime interface starting up..."
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618002294Z" level=info msg="starting plugins..."
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618017146Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618092166Z" level=info msg="No systemd watchdog enabled"
	Dec 12 20:35:18 functional-261311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.223066755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=efc21d87-a1b0-4de5-a48b-a3e014a5db32 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.223827337Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e9bb6f76-9bf0-445e-a911-5989a7f224b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.224384709Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=eb32b7e0-d164-45f4-be96-6799b271663a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.224808771Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=192a05d5-754c-4620-9a7e-630a23b2f5d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.225240365Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d03d55da-4587-4eea-8a9a-e52381826a03 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.225676677Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c7d002dd-9552-4715-b7be-2078da811840 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.226165084Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=daf96e40-8252-45d3-a005-ea53669f5cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.033616408Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2a067e1-2c90-429c-b592-c0026a728c8d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.0344028Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=faa7c5c2-57de-45d0-98b9-b1fc40b3897e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.034956632Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=86aed69e-89fa-4789-b7e0-66c21b53b655 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.035606867Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e8b67148-2c8a-4d5b-8bc5-9c052262c589 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036159986Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=56696416-3d0f-4c77-8dbb-77790563b13a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036707976Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ba267d45-19b4-448f-9f92-2993fe38692a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.037209312Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=301781f2-4844-424c-a8ec-9528bb0007ad name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:49:37.047313   23432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:37.048151   23432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:37.049728   23432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:37.050022   23432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:37.051510   23432 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:49:37 up  3:32,  0 user,  load average: 0.53, 0.30, 0.52
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:49:34 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:35 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1132.
	Dec 12 20:49:35 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:35 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:35 functional-261311 kubelet[23310]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:35 functional-261311 kubelet[23310]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:35 functional-261311 kubelet[23310]: E1212 20:49:35.259755   23310 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:35 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:35 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:35 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1133.
	Dec 12 20:49:35 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:35 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:36 functional-261311 kubelet[23324]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:36 functional-261311 kubelet[23324]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:36 functional-261311 kubelet[23324]: E1212 20:49:36.025446   23324 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:36 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:36 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:36 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1134.
	Dec 12 20:49:36 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:36 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:36 functional-261311 kubelet[23354]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:36 functional-261311 kubelet[23354]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:36 functional-261311 kubelet[23354]: E1212 20:49:36.772837   23354 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:36 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:36 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (304.569558ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (1.72s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 status: exit status 2 (345.498314ms)

                                                
                                                
-- stdout --
	functional-261311
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-linux-arm64 -p functional-261311 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (319.915097ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-linux-arm64 -p functional-261311 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 status -o json: exit status 2 (324.445428ms)

                                                
                                                
-- stdout --
	{"Name":"functional-261311","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-linux-arm64 -p functional-261311 status -o json" : exit status 2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (347.812302ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                        ARGS                                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-261311 service list                                                                                                                     │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ service │ functional-261311 service list -o json                                                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ service │ functional-261311 service --namespace=default --https --url hello-node                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ service │ functional-261311 service hello-node --url --format={{.IP}}                                                                                        │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ service │ functional-261311 service hello-node --url                                                                                                         │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount   │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001:/mount-9p --alsologtostderr -v=1             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh     │ functional-261311 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh     │ functional-261311 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh     │ functional-261311 ssh -- ls -la /mount-9p                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh     │ functional-261311 ssh cat /mount-9p/test-1765572565314390121                                                                                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh     │ functional-261311 ssh mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates                                                                   │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh     │ functional-261311 ssh sudo umount -f /mount-9p                                                                                                     │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh     │ functional-261311 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount   │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo996352420/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh     │ functional-261311 ssh findmnt -T /mount-9p | grep 9p                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh     │ functional-261311 ssh -- ls -la /mount-9p                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh     │ functional-261311 ssh sudo umount -f /mount-9p                                                                                                     │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount   │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount1 --alsologtostderr -v=1               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh     │ functional-261311 ssh findmnt -T /mount1                                                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount   │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount2 --alsologtostderr -v=1               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount   │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount3 --alsologtostderr -v=1               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh     │ functional-261311 ssh findmnt -T /mount1                                                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh     │ functional-261311 ssh findmnt -T /mount2                                                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh     │ functional-261311 ssh findmnt -T /mount3                                                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ mount   │ -p functional-261311 --kill=true                                                                                                                   │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:35:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:35:15.460416  404800 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:35:15.460537  404800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:35:15.460541  404800 out.go:374] Setting ErrFile to fd 2...
	I1212 20:35:15.460545  404800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:35:15.461281  404800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:35:15.461704  404800 out.go:368] Setting JSON to false
	I1212 20:35:15.462524  404800 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11868,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:35:15.462588  404800 start.go:143] virtualization:  
	I1212 20:35:15.465993  404800 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:35:15.469163  404800 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:35:15.469272  404800 notify.go:221] Checking for updates...
	I1212 20:35:15.475214  404800 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:35:15.478288  404800 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:35:15.481030  404800 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:35:15.483916  404800 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:35:15.486846  404800 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:35:15.490383  404800 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:35:15.490523  404800 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:35:15.521733  404800 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:35:15.521840  404800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:35:15.586834  404800 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 20:35:15.575092276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:35:15.586929  404800 docker.go:319] overlay module found
	I1212 20:35:15.590005  404800 out.go:179] * Using the docker driver based on existing profile
	I1212 20:35:15.592944  404800 start.go:309] selected driver: docker
	I1212 20:35:15.592962  404800 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:15.593077  404800 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:35:15.593201  404800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:35:15.653530  404800 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 20:35:15.644295166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:35:15.653919  404800 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:35:15.653944  404800 cni.go:84] Creating CNI manager for ""
	I1212 20:35:15.653992  404800 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:35:15.654035  404800 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:15.657113  404800 out.go:179] * Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	I1212 20:35:15.659873  404800 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:35:15.662874  404800 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:35:15.665759  404800 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:35:15.665839  404800 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:35:15.665900  404800 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:35:15.665919  404800 cache.go:65] Caching tarball of preloaded images
	I1212 20:35:15.666041  404800 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:35:15.666050  404800 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:35:15.666202  404800 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:35:15.685367  404800 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:35:15.685378  404800 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:35:15.685400  404800 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:35:15.685432  404800 start.go:360] acquireMachinesLock for functional-261311: {Name:mkbc4e6c743e47953e99b8ce65e244d33b483105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:35:15.685502  404800 start.go:364] duration metric: took 54.475µs to acquireMachinesLock for "functional-261311"
	I1212 20:35:15.685521  404800 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:35:15.685526  404800 fix.go:54] fixHost starting: 
	I1212 20:35:15.685789  404800 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:35:15.703273  404800 fix.go:112] recreateIfNeeded on functional-261311: state=Running err=<nil>
	W1212 20:35:15.703293  404800 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:35:15.712450  404800 out.go:252] * Updating the running docker "functional-261311" container ...
	I1212 20:35:15.712481  404800 machine.go:94] provisionDockerMachine start ...
	I1212 20:35:15.712578  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:15.736656  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:15.736977  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:15.736984  404800 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:35:15.891915  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:35:15.891929  404800 ubuntu.go:182] provisioning hostname "functional-261311"
	I1212 20:35:15.891999  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:15.910460  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:15.910779  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:15.910787  404800 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-261311 && echo "functional-261311" | sudo tee /etc/hostname
	I1212 20:35:16.077690  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:35:16.077778  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.097025  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:16.097341  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:16.097354  404800 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-261311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-261311/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-261311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:35:16.252758  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:35:16.252773  404800 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:35:16.252793  404800 ubuntu.go:190] setting up certificates
	I1212 20:35:16.252801  404800 provision.go:84] configureAuth start
	I1212 20:35:16.252918  404800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:35:16.270682  404800 provision.go:143] copyHostCerts
	I1212 20:35:16.270755  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:35:16.270763  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:35:16.270834  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:35:16.270926  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:35:16.270930  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:35:16.270953  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:35:16.271010  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:35:16.271014  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:35:16.271036  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:35:16.271079  404800 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.functional-261311 san=[127.0.0.1 192.168.49.2 functional-261311 localhost minikube]
	I1212 20:35:16.466046  404800 provision.go:177] copyRemoteCerts
	I1212 20:35:16.466103  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:35:16.466141  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.490439  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:16.596331  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:35:16.614499  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:35:16.632168  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:35:16.649948  404800 provision.go:87] duration metric: took 397.124655ms to configureAuth
	I1212 20:35:16.649967  404800 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:35:16.650174  404800 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:35:16.650275  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.667262  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:16.667562  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:16.667574  404800 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:35:17.020390  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:35:17.020403  404800 machine.go:97] duration metric: took 1.307915361s to provisionDockerMachine
	I1212 20:35:17.020413  404800 start.go:293] postStartSetup for "functional-261311" (driver="docker")
	I1212 20:35:17.020431  404800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:35:17.020498  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:35:17.020542  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.039179  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.144817  404800 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:35:17.148499  404800 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:35:17.148517  404800 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:35:17.148528  404800 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:35:17.148587  404800 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:35:17.148671  404800 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:35:17.148745  404800 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> hosts in /etc/test/nested/copy/364853
	I1212 20:35:17.148790  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364853
	I1212 20:35:17.156874  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:35:17.175633  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts --> /etc/test/nested/copy/364853/hosts (40 bytes)
	I1212 20:35:17.193693  404800 start.go:296] duration metric: took 173.265259ms for postStartSetup
	I1212 20:35:17.193768  404800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:35:17.193829  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.212738  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.326054  404800 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:35:17.331128  404800 fix.go:56] duration metric: took 1.64559363s for fixHost
	I1212 20:35:17.331145  404800 start.go:83] releasing machines lock for "functional-261311", held for 1.645635346s
	I1212 20:35:17.331211  404800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:35:17.348942  404800 ssh_runner.go:195] Run: cat /version.json
	I1212 20:35:17.348993  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.349240  404800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:35:17.349288  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.377660  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.380423  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.480436  404800 ssh_runner.go:195] Run: systemctl --version
	I1212 20:35:17.572826  404800 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:35:17.610243  404800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:35:17.614893  404800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:35:17.614954  404800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:35:17.623289  404800 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:35:17.623303  404800 start.go:496] detecting cgroup driver to use...
	I1212 20:35:17.623333  404800 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:35:17.623377  404800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:35:17.638845  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:35:17.652624  404800 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:35:17.652690  404800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:35:17.668971  404800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:35:17.682562  404800 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:35:17.807109  404800 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:35:17.921667  404800 docker.go:234] disabling docker service ...
	I1212 20:35:17.921741  404800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:35:17.940321  404800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:35:17.957092  404800 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:35:18.087741  404800 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:35:18.206163  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:35:18.219734  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:35:18.233813  404800 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:35:18.233881  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.242826  404800 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:35:18.242900  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.252023  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.261290  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.270163  404800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:35:18.278452  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.287612  404800 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.296129  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.305360  404800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:35:18.313008  404800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:35:18.320507  404800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:35:18.433496  404800 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:35:18.624476  404800 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:35:18.624545  404800 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:35:18.628455  404800 start.go:564] Will wait 60s for crictl version
	I1212 20:35:18.628509  404800 ssh_runner.go:195] Run: which crictl
	I1212 20:35:18.631901  404800 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:35:18.657967  404800 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
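The block above rewrites /etc/crictl.yaml and patches /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup, and the unprivileged-port sysctl) before restarting CRI-O and waiting for its socket. A quick way to confirm those edits landed, as a hedged sketch that uses only paths already shown above:
	# confirm the values the sed edits wrote into the CRI-O drop-in
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# confirm crictl was pointed at the CRI-O socket
	cat /etc/crictl.yaml   # runtime-endpoint: unix:///var/run/crio/crio.sock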
	I1212 20:35:18.658043  404800 ssh_runner.go:195] Run: crio --version
	I1212 20:35:18.686054  404800 ssh_runner.go:195] Run: crio --version
	I1212 20:35:18.728907  404800 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:35:18.731836  404800 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:35:18.758101  404800 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:35:18.765430  404800 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 20:35:18.768359  404800 kubeadm.go:884] updating cluster {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:35:18.768498  404800 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:35:18.768569  404800 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:35:18.809159  404800 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:35:18.809172  404800 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:35:18.809226  404800 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:35:18.835786  404800 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:35:18.835798  404800 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:35:18.835804  404800 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 20:35:18.835897  404800 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-261311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:35:18.835978  404800 ssh_runner.go:195] Run: crio config
	I1212 20:35:18.911975  404800 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 20:35:18.911996  404800 cni.go:84] Creating CNI manager for ""
	I1212 20:35:18.912005  404800 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:35:18.912021  404800 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:35:18.912048  404800 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-261311 NodeName:functional-261311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfig
Opts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:35:18.912174  404800 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-261311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:35:18.912242  404800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:35:18.919878  404800 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:35:18.919945  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:35:18.927506  404800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:35:18.940260  404800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:35:18.953546  404800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1212 20:35:18.966154  404800 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:35:18.969878  404800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:35:19.088694  404800 ssh_runner.go:195] Run: sudo systemctl start kubelet
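At this point the kubelet drop-in (/etc/systemd/system/kubelet.service.d/10-kubeadm.conf), the kubelet.service unit, and /var/tmp/minikube/kubeadm.yaml.new have all been written and the kubelet has been restarted. Two hedged checks that can be run by hand on the node; `systemctl cat` is standard systemd, while `kubeadm config validate` assumes a kubeadm release recent enough to ship that subcommand:
	# show kubelet.service plus the 10-kubeadm.conf drop-in exactly as systemd sees them
	sudo systemctl cat kubelet
	# lint the freshly rendered kubeadm config with the same binaries the cluster uses
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new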
	I1212 20:35:19.456785  404800 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311 for IP: 192.168.49.2
	I1212 20:35:19.456797  404800 certs.go:195] generating shared ca certs ...
	I1212 20:35:19.456811  404800 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:35:19.457015  404800 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:35:19.457061  404800 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:35:19.457083  404800 certs.go:257] generating profile certs ...
	I1212 20:35:19.457188  404800 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key
	I1212 20:35:19.457266  404800 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7
	I1212 20:35:19.457320  404800 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key
	I1212 20:35:19.457484  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:35:19.457522  404800 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:35:19.457530  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:35:19.457572  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:35:19.457613  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:35:19.457656  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:35:19.457720  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:35:19.458537  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:35:19.481387  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:35:19.503914  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:35:19.527911  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:35:19.547817  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:35:19.567001  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:35:19.585411  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:35:19.603199  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:35:19.621415  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:35:19.639746  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:35:19.657747  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:35:19.675414  404800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:35:19.688797  404800 ssh_runner.go:195] Run: openssl version
	I1212 20:35:19.695324  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.703181  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:35:19.710800  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.714682  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.714738  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.755943  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:35:19.764525  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.772260  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:35:19.780093  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.783725  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.783778  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.825039  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:35:19.832411  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.839917  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:35:19.847683  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.851494  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.851551  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.892840  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:35:19.900611  404800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:35:19.904415  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:35:19.945816  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:35:19.987206  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:35:20.028949  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:35:20.071640  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:35:20.114011  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
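The openssl calls above follow one pattern per certificate: CAs are linked into /usr/share/ca-certificates and /etc/ssl/certs and their OpenSSL subject-hash link is verified, while the serving certs are checked for at least 24 hours of remaining validity. Condensed into a shell sketch over the same files shown above:
	# install the CA by name, then confirm the subject-hash symlink resolves
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo test -L "/etc/ssl/certs/${hash}.0"    # b5213941.0 in this run
	# fail if a serving cert expires within the next 24h (86400 seconds)
	openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt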
	I1212 20:35:20.155956  404800 kubeadm.go:401] StartCluster: {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA
APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disab
leOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:20.156040  404800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:35:20.156106  404800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:35:20.185271  404800 cri.go:89] found id: ""
	I1212 20:35:20.185335  404800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:35:20.193716  404800 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:35:20.193726  404800 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:35:20.193778  404800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:35:20.201404  404800 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.201928  404800 kubeconfig.go:125] found "functional-261311" server: "https://192.168.49.2:8441"
	I1212 20:35:20.203285  404800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:35:20.213068  404800 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 20:20:42.746943766 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 20:35:18.963900938 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
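The drift check is nothing more than the diff shown above: /var/tmp/minikube/kubeadm.yaml (what was last applied) against kubeadm.yaml.new (what was just rendered), and here only the enable-admission-plugins value differs. When the diff is non-empty, minikube takes the restart path, which later copies the .new file into place and re-runs the kubeadm init phases. The equivalent shell pattern, roughly:
	# reconfigure only when the rendered config differs from the one last applied
	if ! sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	  # ...followed by the "kubeadm init phase" commands seen further down
	fi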
	I1212 20:35:20.213088  404800 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:35:20.213099  404800 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 20:35:20.213154  404800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:35:20.242899  404800 cri.go:89] found id: ""
	I1212 20:35:20.242960  404800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:35:20.261588  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:35:20.270004  404800 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 12 20:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 20:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 20:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 20:24 /etc/kubernetes/scheduler.conf
	
	I1212 20:35:20.270062  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:35:20.278110  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:35:20.285789  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.285844  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:35:20.293376  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:35:20.301132  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.301185  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:35:20.309065  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:35:20.316914  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.316967  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:35:20.324673  404800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:35:20.332520  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:20.381164  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:21.740495  404800 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.359307117s)
	I1212 20:35:21.740554  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:21.936349  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:22.006437  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
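The five init phases above regenerate certs, kubeconfigs, the kubelet bootstrap, and the control-plane and etcd static pod manifests. Given staticPodPath: /etc/kubernetes/manifests in the kubelet config earlier, two hedged spot checks for whether the kubelet has actually picked those manifests up:
	# manifests written by "kubeadm init phase control-plane all" and "etcd local"
	sudo ls -l /etc/kubernetes/manifests
	# pod sandboxes the kubelet has created from them (empty here, as the waits below show)
	sudo crictl pods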
	I1212 20:35:22.060809  404800 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:35:22.060899  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:22.561081  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:23.062037  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:23.561673  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:24.061283  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:24.561690  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:25.061084  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:25.561740  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:26.061753  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:26.561615  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:27.061476  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:27.561193  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:28.061088  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:28.561754  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:29.061218  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:29.561124  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:30.061364  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:30.561503  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:31.061616  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:31.561042  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:32.061002  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:32.561635  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:33.061101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:33.561100  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:34.061640  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:34.562032  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:35.061030  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:35.561966  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:36.061881  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:36.561895  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:37.061604  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:37.561065  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:38.062060  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:38.561065  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:39.061118  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:39.561000  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:40.061043  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:40.561911  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:41.061748  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:41.561627  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:42.061101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:42.561174  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:43.061190  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:43.561060  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:44.061057  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:44.561587  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:45.061910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:45.561122  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:46.061055  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:46.561141  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:47.061107  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:47.560994  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:48.062000  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:48.561057  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:49.061151  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:49.561089  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:50.061007  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:50.561745  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:51.061094  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:51.561413  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:52.061652  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:52.561706  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:53.061685  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:53.561118  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:54.061047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:54.561109  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:55.061626  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:55.561543  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:56.061374  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:56.561047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:57.062047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:57.561053  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:58.061760  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:58.561015  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:59.061910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:59.561602  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:00.061050  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:00.565101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:01.061738  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:01.561016  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:02.061584  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:02.561705  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:03.062021  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:03.561146  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:04.061266  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:04.561610  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:05.061786  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:05.561910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:06.062016  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:06.561621  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:07.061104  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:07.561077  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:08.061034  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:08.561076  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:09.061095  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:09.561610  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:10.062030  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:10.561403  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:11.061217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:11.561772  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:12.061561  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:12.561252  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:13.061001  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:13.561813  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:14.061556  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:14.561701  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:15.061061  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:15.561415  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:16.061155  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:16.561701  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:17.061682  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:17.561217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:18.061108  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:18.561055  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:19.061653  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:19.561105  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:20.061064  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:20.561836  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:21.061167  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:21.561650  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
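Everything from 20:35:22 onward is api_server.go waiting for a kube-apiserver process: the same pgrep runs roughly every 500 ms, never matches, and after about a minute minikube starts interleaving diagnostic log collection (container status, kubelet, dmesg, describe nodes, CRI-O) with the checks below. Reduced to a shell sketch, with the retry count an assumption:
	# poll for the apiserver process at ~500 ms intervals until it appears or we give up
	for _ in $(seq 1 120); do
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && break
	  sleep 0.5
	done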
	I1212 20:36:22.061836  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:22.061921  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:22.088621  404800 cri.go:89] found id: ""
	I1212 20:36:22.088636  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.088643  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:22.088648  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:22.088710  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:22.115845  404800 cri.go:89] found id: ""
	I1212 20:36:22.115860  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.115867  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:22.115872  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:22.115934  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:22.145607  404800 cri.go:89] found id: ""
	I1212 20:36:22.145622  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.145629  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:22.145634  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:22.145694  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:22.175762  404800 cri.go:89] found id: ""
	I1212 20:36:22.175782  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.175790  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:22.175795  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:22.175852  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:22.205262  404800 cri.go:89] found id: ""
	I1212 20:36:22.205277  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.205283  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:22.205288  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:22.205343  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:22.240968  404800 cri.go:89] found id: ""
	I1212 20:36:22.240981  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.240988  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:22.240993  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:22.241050  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:22.272662  404800 cri.go:89] found id: ""
	I1212 20:36:22.272676  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.272683  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:22.272691  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:22.272700  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:22.301824  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:22.301841  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:22.370470  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:22.370488  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:22.385289  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:22.385306  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:22.449648  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:22.440970   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.441631   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443294   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443822   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.445497   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:22.440970   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.441631   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443294   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443822   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.445497   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
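The describe-nodes failure above is a symptom rather than a cause: nothing is listening on 127.0.0.1:8441 because no kube-apiserver container has been created (every crictl listing in this section returns an empty id list). Hedged checks one could run on the node to confirm that, assuming ss and curl are available in the image:
	# is there an apiserver container at all? (returns nothing in this run)
	sudo crictl ps -a --name kube-apiserver
	# is anything bound to the apiserver port?
	sudo ss -ltn 'sport = :8441'
	# does the endpoint answer? (expected: connection refused here)
	curl -sk https://localhost:8441/healthz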
	I1212 20:36:22.449659  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:22.449670  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:25.019320  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:25.030277  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:25.030345  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:25.060950  404800 cri.go:89] found id: ""
	I1212 20:36:25.060975  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.060982  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:25.060988  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:25.061049  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:25.087641  404800 cri.go:89] found id: ""
	I1212 20:36:25.087663  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.087670  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:25.087675  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:25.087735  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:25.114870  404800 cri.go:89] found id: ""
	I1212 20:36:25.114885  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.114893  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:25.114899  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:25.114963  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:25.140642  404800 cri.go:89] found id: ""
	I1212 20:36:25.140664  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.140671  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:25.140677  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:25.140736  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:25.166644  404800 cri.go:89] found id: ""
	I1212 20:36:25.166658  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.166665  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:25.166671  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:25.166731  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:25.192547  404800 cri.go:89] found id: ""
	I1212 20:36:25.192561  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.192567  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:25.192572  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:25.192635  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:25.231874  404800 cri.go:89] found id: ""
	I1212 20:36:25.231889  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.231895  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:25.231903  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:25.231914  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:25.315537  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:25.315559  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:25.330635  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:25.330654  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:25.395220  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:25.386939   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.387844   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389637   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389964   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.391476   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:25.386939   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.387844   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389637   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389964   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.391476   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:25.395260  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:25.395272  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:25.467585  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:25.467605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
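
Note: the block above is one complete diagnostic pass. minikube polls for a running kube-apiserver (sudo pgrep -xnf kube-apiserver.*minikube.*), lists CRI containers for each control-plane component, finds none, then gathers kubelet, dmesg, describe-nodes, CRI-O, and container-status logs; the describe-nodes step fails because nothing is listening on localhost:8441. As an illustration only, not minikube's actual implementation, a minimal Go sketch of this kind of wait loop, assuming the apiserver is expected on localhost:8441 (the port the failing kubectl calls target), might look like:

	// apiserver_wait_sketch.go -- illustrative sketch only; not minikube code.
	// Polls a TCP endpoint until it accepts connections or the deadline expires,
	// mirroring the ~3s cadence between the polling passes in the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForAPIServer(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil // port is accepting connections
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
	}

	func main() {
		// localhost:8441 is taken from the connection-refused errors above.
		if err := waitForAPIServer("localhost:8441", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}

Until a dial like this succeeds, every pass below repeats the same empty container listings and connection-refused errors.
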
	I1212 20:36:27.999765  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:28.012318  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:28.012406  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:28.038452  404800 cri.go:89] found id: ""
	I1212 20:36:28.038467  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.038475  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:28.038481  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:28.038550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:28.065565  404800 cri.go:89] found id: ""
	I1212 20:36:28.065579  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.065586  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:28.065591  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:28.065652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:28.091553  404800 cri.go:89] found id: ""
	I1212 20:36:28.091574  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.091581  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:28.091587  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:28.091651  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:28.117664  404800 cri.go:89] found id: ""
	I1212 20:36:28.117677  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.117684  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:28.117689  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:28.117747  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:28.143314  404800 cri.go:89] found id: ""
	I1212 20:36:28.143328  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.143335  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:28.143339  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:28.143396  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:28.170365  404800 cri.go:89] found id: ""
	I1212 20:36:28.170379  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.170386  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:28.170391  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:28.170450  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:28.194993  404800 cri.go:89] found id: ""
	I1212 20:36:28.195013  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.195019  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:28.195027  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:28.195037  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:28.264144  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:28.264163  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:28.294480  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:28.294497  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:28.364064  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:28.364087  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:28.378788  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:28.378811  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:28.443238  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:28.435365   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.435947   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437460   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437963   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.439466   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:28.435365   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.435947   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437460   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437963   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.439466   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:30.944182  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:30.954580  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:30.954652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:30.981452  404800 cri.go:89] found id: ""
	I1212 20:36:30.981467  404800 logs.go:282] 0 containers: []
	W1212 20:36:30.981474  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:30.981479  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:30.981543  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:31.009852  404800 cri.go:89] found id: ""
	I1212 20:36:31.009868  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.009875  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:31.009881  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:31.009949  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:31.041648  404800 cri.go:89] found id: ""
	I1212 20:36:31.041664  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.041671  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:31.041676  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:31.041741  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:31.071159  404800 cri.go:89] found id: ""
	I1212 20:36:31.071194  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.071203  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:31.071208  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:31.071274  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:31.101318  404800 cri.go:89] found id: ""
	I1212 20:36:31.101333  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.101340  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:31.101345  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:31.101407  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:31.128905  404800 cri.go:89] found id: ""
	I1212 20:36:31.128921  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.128937  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:31.128943  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:31.129019  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:31.156884  404800 cri.go:89] found id: ""
	I1212 20:36:31.156899  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.156906  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:31.156914  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:31.156924  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:31.229169  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:31.229188  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:31.244638  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:31.244655  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:31.316835  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:31.307348   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.308074   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.309792   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.310466   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.311410   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:31.307348   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.308074   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.309792   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.310466   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.311410   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:31.316848  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:31.316866  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:31.386236  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:31.386258  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:33.917579  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:33.927716  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:33.927782  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:33.952915  404800 cri.go:89] found id: ""
	I1212 20:36:33.952929  404800 logs.go:282] 0 containers: []
	W1212 20:36:33.952936  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:33.952941  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:33.952998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:33.986667  404800 cri.go:89] found id: ""
	I1212 20:36:33.986681  404800 logs.go:282] 0 containers: []
	W1212 20:36:33.986688  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:33.986693  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:33.986753  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:34.017351  404800 cri.go:89] found id: ""
	I1212 20:36:34.017367  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.017374  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:34.017379  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:34.017459  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:34.044495  404800 cri.go:89] found id: ""
	I1212 20:36:34.044509  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.044517  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:34.044522  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:34.044579  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:34.070939  404800 cri.go:89] found id: ""
	I1212 20:36:34.070953  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.070960  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:34.070964  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:34.071022  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:34.099384  404800 cri.go:89] found id: ""
	I1212 20:36:34.099398  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.099405  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:34.099411  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:34.099469  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:34.125342  404800 cri.go:89] found id: ""
	I1212 20:36:34.125357  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.125364  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:34.125372  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:34.125383  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:34.195370  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:34.195391  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:34.212114  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:34.212130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:34.294767  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:34.286119   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.286818   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.288478   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.289037   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.290758   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:34.286119   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.286818   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.288478   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.289037   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.290758   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:34.294788  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:34.294798  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:34.365333  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:34.365354  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:36.899244  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:36.909418  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:36.909481  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:36.934188  404800 cri.go:89] found id: ""
	I1212 20:36:36.934202  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.934219  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:36.934224  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:36.934281  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:36.959806  404800 cri.go:89] found id: ""
	I1212 20:36:36.959821  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.959828  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:36.959832  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:36.959898  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:36.986148  404800 cri.go:89] found id: ""
	I1212 20:36:36.986162  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.986169  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:36.986174  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:36.986231  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:37.017876  404800 cri.go:89] found id: ""
	I1212 20:36:37.017892  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.017899  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:37.017905  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:37.017971  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:37.047901  404800 cri.go:89] found id: ""
	I1212 20:36:37.047915  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.047921  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:37.047926  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:37.047985  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:37.076531  404800 cri.go:89] found id: ""
	I1212 20:36:37.076546  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.076553  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:37.076558  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:37.076615  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:37.102846  404800 cri.go:89] found id: ""
	I1212 20:36:37.102870  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.102877  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:37.102885  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:37.102896  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:37.134007  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:37.134024  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:37.207327  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:37.207352  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:37.222638  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:37.222657  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:37.290385  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:37.281958   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.282679   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.283817   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.284511   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.286319   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:37.281958   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.282679   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.283817   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.284511   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.286319   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:37.290395  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:37.290406  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:39.860964  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:39.871500  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:39.871558  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:39.898740  404800 cri.go:89] found id: ""
	I1212 20:36:39.898755  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.898762  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:39.898767  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:39.898830  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:39.925154  404800 cri.go:89] found id: ""
	I1212 20:36:39.925168  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.925175  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:39.925180  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:39.925239  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:39.950208  404800 cri.go:89] found id: ""
	I1212 20:36:39.950223  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.950229  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:39.950234  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:39.950297  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:39.976836  404800 cri.go:89] found id: ""
	I1212 20:36:39.976851  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.976857  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:39.976863  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:39.976936  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:40.009665  404800 cri.go:89] found id: ""
	I1212 20:36:40.009695  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.010153  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:40.010168  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:40.010262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:40.067797  404800 cri.go:89] found id: ""
	I1212 20:36:40.067813  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.067838  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:40.067844  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:40.067922  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:40.103262  404800 cri.go:89] found id: ""
	I1212 20:36:40.103277  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.103287  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:40.103295  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:40.103308  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:40.119554  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:40.119573  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:40.195337  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:40.185349   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.186460   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188199   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188873   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.190824   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:40.185349   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.186460   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188199   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188873   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.190824   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:40.195364  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:40.195376  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:40.270010  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:40.270029  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:40.299631  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:40.299652  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:42.866117  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:42.876408  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:42.876467  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:42.901308  404800 cri.go:89] found id: ""
	I1212 20:36:42.901321  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.901328  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:42.901333  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:42.901396  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:42.925954  404800 cri.go:89] found id: ""
	I1212 20:36:42.925968  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.925975  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:42.925980  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:42.926041  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:42.951209  404800 cri.go:89] found id: ""
	I1212 20:36:42.951224  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.951231  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:42.951236  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:42.951296  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:42.977995  404800 cri.go:89] found id: ""
	I1212 20:36:42.978010  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.978017  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:42.978022  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:42.978082  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:43.004860  404800 cri.go:89] found id: ""
	I1212 20:36:43.004875  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.004892  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:43.004898  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:43.004973  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:43.040400  404800 cri.go:89] found id: ""
	I1212 20:36:43.040414  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.040421  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:43.040427  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:43.040485  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:43.068090  404800 cri.go:89] found id: ""
	I1212 20:36:43.068104  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.068122  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:43.068130  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:43.068144  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:43.140175  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:43.140195  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:43.154957  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:43.154976  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:43.225443  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:43.216555   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.217274   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.218829   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.219142   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.220753   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:43.216555   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.217274   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.218829   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.219142   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.220753   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:43.225462  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:43.225473  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:43.307152  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:43.307175  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:45.837432  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:45.847721  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:45.847783  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:45.874064  404800 cri.go:89] found id: ""
	I1212 20:36:45.874118  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.874125  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:45.874131  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:45.874197  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:45.902655  404800 cri.go:89] found id: ""
	I1212 20:36:45.902669  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.902676  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:45.902681  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:45.902739  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:45.929017  404800 cri.go:89] found id: ""
	I1212 20:36:45.929031  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.929044  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:45.929050  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:45.929118  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:45.958749  404800 cri.go:89] found id: ""
	I1212 20:36:45.958763  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.958770  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:45.958776  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:45.958837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:45.989217  404800 cri.go:89] found id: ""
	I1212 20:36:45.989239  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.989246  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:45.989252  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:45.989317  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:46.017594  404800 cri.go:89] found id: ""
	I1212 20:36:46.017609  404800 logs.go:282] 0 containers: []
	W1212 20:36:46.017616  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:46.017621  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:46.017681  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:46.047594  404800 cri.go:89] found id: ""
	I1212 20:36:46.047619  404800 logs.go:282] 0 containers: []
	W1212 20:36:46.047628  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:46.047636  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:46.047647  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:46.113115  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:46.113137  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:46.128309  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:46.128328  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:46.195035  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:46.186544   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.187172   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.188933   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.189538   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.191089   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:46.186544   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.187172   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.188933   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.189538   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.191089   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:46.195044  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:46.195054  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:46.268896  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:46.268917  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:48.800382  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:48.810496  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:48.810556  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:48.835685  404800 cri.go:89] found id: ""
	I1212 20:36:48.835699  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.835706  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:48.835712  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:48.835772  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:48.864872  404800 cri.go:89] found id: ""
	I1212 20:36:48.864892  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.864899  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:48.864904  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:48.864969  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:48.889491  404800 cri.go:89] found id: ""
	I1212 20:36:48.889505  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.889512  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:48.889517  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:48.889577  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:48.914454  404800 cri.go:89] found id: ""
	I1212 20:36:48.914468  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.914474  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:48.914480  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:48.914533  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:48.938478  404800 cri.go:89] found id: ""
	I1212 20:36:48.938492  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.938499  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:48.938504  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:48.938570  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:48.964129  404800 cri.go:89] found id: ""
	I1212 20:36:48.964143  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.964151  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:48.964156  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:48.964221  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:48.989666  404800 cri.go:89] found id: ""
	I1212 20:36:48.989680  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.989687  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:48.989695  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:48.989705  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:49.063089  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:49.063110  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:49.095579  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:49.095596  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:49.163720  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:49.163740  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:49.178328  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:49.178344  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:49.260325  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:49.251791   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.252708   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.253936   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.254698   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.256413   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:49.251791   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.252708   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.253936   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.254698   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.256413   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:51.761045  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:51.771641  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:51.771702  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:51.797458  404800 cri.go:89] found id: ""
	I1212 20:36:51.797472  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.797479  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:51.797484  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:51.797541  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:51.823244  404800 cri.go:89] found id: ""
	I1212 20:36:51.823268  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.823274  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:51.823279  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:51.823346  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:51.848495  404800 cri.go:89] found id: ""
	I1212 20:36:51.848509  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.848516  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:51.848520  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:51.848580  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:51.873152  404800 cri.go:89] found id: ""
	I1212 20:36:51.873168  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.873175  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:51.873180  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:51.873238  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:51.898283  404800 cri.go:89] found id: ""
	I1212 20:36:51.898297  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.898305  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:51.898310  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:51.898370  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:51.924343  404800 cri.go:89] found id: ""
	I1212 20:36:51.924358  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.924386  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:51.924392  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:51.924455  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:51.949330  404800 cri.go:89] found id: ""
	I1212 20:36:51.949345  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.949352  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:51.949359  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:51.949371  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:52.016304  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:52.016326  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:52.032963  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:52.032980  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:52.109987  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:52.099831   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.100720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.101466   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.103451   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.104261   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:52.099831   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.100720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.101466   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.103451   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.104261   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:52.109999  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:52.110012  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:52.180144  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:52.180164  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:54.720069  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:54.730740  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:54.730803  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:54.758017  404800 cri.go:89] found id: ""
	I1212 20:36:54.758032  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.758038  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:54.758044  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:54.758105  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:54.790190  404800 cri.go:89] found id: ""
	I1212 20:36:54.790210  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.790217  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:54.790222  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:54.790281  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:54.819974  404800 cri.go:89] found id: ""
	I1212 20:36:54.819989  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.819996  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:54.820001  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:54.820065  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:54.847251  404800 cri.go:89] found id: ""
	I1212 20:36:54.847265  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.847272  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:54.847277  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:54.847342  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:54.873168  404800 cri.go:89] found id: ""
	I1212 20:36:54.873182  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.873190  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:54.873195  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:54.873262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:54.898145  404800 cri.go:89] found id: ""
	I1212 20:36:54.898160  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.898167  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:54.898175  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:54.898237  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:54.924123  404800 cri.go:89] found id: ""
	I1212 20:36:54.924146  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.924155  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:54.924163  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:54.924173  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:54.989756  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:54.989775  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:55.021117  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:55.021137  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:55.090802  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:55.082767   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.083409   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.084984   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.085445   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.086924   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:55.082767   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.083409   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.084984   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.085445   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.086924   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:55.090816  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:55.090828  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:55.164266  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:55.164287  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:57.696458  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:57.706599  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:57.706656  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:57.732396  404800 cri.go:89] found id: ""
	I1212 20:36:57.732410  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.732420  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:57.732425  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:57.732485  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:57.758017  404800 cri.go:89] found id: ""
	I1212 20:36:57.758032  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.758039  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:57.758044  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:57.758100  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:57.784957  404800 cri.go:89] found id: ""
	I1212 20:36:57.784971  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.784978  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:57.784983  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:57.785044  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:57.810973  404800 cri.go:89] found id: ""
	I1212 20:36:57.810986  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.810993  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:57.810999  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:57.811054  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:57.837384  404800 cri.go:89] found id: ""
	I1212 20:36:57.837398  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.837406  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:57.837411  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:57.837487  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:57.863576  404800 cri.go:89] found id: ""
	I1212 20:36:57.863598  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.863605  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:57.863610  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:57.863676  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:57.889215  404800 cri.go:89] found id: ""
	I1212 20:36:57.889236  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.889244  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:57.889252  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:57.889263  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:57.956054  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:57.956076  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:57.970574  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:57.970590  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:58.038134  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:58.029330   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.029739   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.031379   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.032214   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.033970   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:58.029330   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.029739   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.031379   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.032214   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.033970   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:58.038144  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:58.038160  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:58.109516  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:58.109541  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:00.640789  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:00.651136  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:00.651196  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:00.678187  404800 cri.go:89] found id: ""
	I1212 20:37:00.678202  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.678209  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:00.678215  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:00.678275  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:00.703384  404800 cri.go:89] found id: ""
	I1212 20:37:00.703400  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.703407  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:00.703412  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:00.703474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:00.735999  404800 cri.go:89] found id: ""
	I1212 20:37:00.736013  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.736020  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:00.736025  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:00.736083  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:00.762232  404800 cri.go:89] found id: ""
	I1212 20:37:00.762246  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.762253  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:00.762258  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:00.762314  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:00.788575  404800 cri.go:89] found id: ""
	I1212 20:37:00.788589  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.788596  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:00.788601  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:00.788663  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:00.815050  404800 cri.go:89] found id: ""
	I1212 20:37:00.815065  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.815081  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:00.815087  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:00.815146  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:00.840166  404800 cri.go:89] found id: ""
	I1212 20:37:00.840180  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.840196  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:00.840205  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:00.840216  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:00.905766  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:00.905787  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:00.920612  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:00.920631  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:00.987903  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:00.979886   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.980290   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.981934   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.982374   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.983860   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:00.979886   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.980290   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.981934   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.982374   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.983860   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:00.987914  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:00.987926  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:01.058125  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:01.058146  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:03.588584  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:03.599133  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:03.599202  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:03.629322  404800 cri.go:89] found id: ""
	I1212 20:37:03.629336  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.629343  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:03.629348  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:03.629410  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:03.654415  404800 cri.go:89] found id: ""
	I1212 20:37:03.654429  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.654436  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:03.654443  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:03.654499  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:03.679922  404800 cri.go:89] found id: ""
	I1212 20:37:03.679937  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.679944  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:03.679950  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:03.680015  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:03.706619  404800 cri.go:89] found id: ""
	I1212 20:37:03.706634  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.706640  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:03.706646  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:03.706707  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:03.733101  404800 cri.go:89] found id: ""
	I1212 20:37:03.733116  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.733123  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:03.733128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:03.733189  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:03.758431  404800 cri.go:89] found id: ""
	I1212 20:37:03.758445  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.758452  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:03.758457  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:03.758520  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:03.789138  404800 cri.go:89] found id: ""
	I1212 20:37:03.789152  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.789159  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:03.789166  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:03.789177  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:03.852394  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:03.843826   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.844548   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846260   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846901   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.848580   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:03.843826   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.844548   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846260   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846901   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.848580   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:03.852404  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:03.852415  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:03.921263  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:03.921283  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:03.950006  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:03.950022  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:04.020715  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:04.020739  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:06.536553  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:06.547113  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:06.547176  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:06.575862  404800 cri.go:89] found id: ""
	I1212 20:37:06.575876  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.575883  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:06.575888  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:06.575947  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:06.601781  404800 cri.go:89] found id: ""
	I1212 20:37:06.601796  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.601803  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:06.601808  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:06.601868  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:06.627486  404800 cri.go:89] found id: ""
	I1212 20:37:06.627500  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.627507  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:06.627520  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:06.627577  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:06.656432  404800 cri.go:89] found id: ""
	I1212 20:37:06.656446  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.656454  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:06.656465  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:06.656526  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:06.681705  404800 cri.go:89] found id: ""
	I1212 20:37:06.681719  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.681726  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:06.681731  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:06.681794  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:06.707068  404800 cri.go:89] found id: ""
	I1212 20:37:06.707083  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.707090  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:06.707095  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:06.707157  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:06.734286  404800 cri.go:89] found id: ""
	I1212 20:37:06.734300  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.734307  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:06.734314  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:06.734324  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:06.799595  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:06.799616  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:06.814521  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:06.814543  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:06.881453  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:06.872121   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.872841   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.874695   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.875330   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.876927   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:06.872121   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.872841   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.874695   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.875330   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.876927   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:06.881463  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:06.881474  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:06.950345  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:06.950365  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:09.488970  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:09.500875  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:09.500940  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:09.529418  404800 cri.go:89] found id: ""
	I1212 20:37:09.529433  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.529439  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:09.529445  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:09.529505  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:09.559685  404800 cri.go:89] found id: ""
	I1212 20:37:09.559700  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.559707  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:09.559712  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:09.559772  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:09.587781  404800 cri.go:89] found id: ""
	I1212 20:37:09.587796  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.587802  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:09.587807  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:09.587869  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:09.613804  404800 cri.go:89] found id: ""
	I1212 20:37:09.613820  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.613826  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:09.613832  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:09.613903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:09.639550  404800 cri.go:89] found id: ""
	I1212 20:37:09.639566  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.639573  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:09.639578  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:09.639644  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:09.669938  404800 cri.go:89] found id: ""
	I1212 20:37:09.669953  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.669960  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:09.669965  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:09.670025  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:09.696771  404800 cri.go:89] found id: ""
	I1212 20:37:09.696785  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.696799  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:09.696807  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:09.696818  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:09.763319  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:09.763340  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:09.778782  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:09.778799  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:09.846376  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:09.837510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.838340   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.839144   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.840746   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.841106   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:09.837510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.838340   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.839144   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.840746   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.841106   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:09.846385  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:09.846396  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:09.917476  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:09.917497  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:12.447817  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:12.457978  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:12.458042  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:12.491473  404800 cri.go:89] found id: ""
	I1212 20:37:12.491487  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.491495  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:12.491500  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:12.491559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:12.522865  404800 cri.go:89] found id: ""
	I1212 20:37:12.522881  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.522888  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:12.522892  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:12.522959  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:12.548498  404800 cri.go:89] found id: ""
	I1212 20:37:12.548514  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.548521  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:12.548526  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:12.548592  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:12.579700  404800 cri.go:89] found id: ""
	I1212 20:37:12.579714  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.579721  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:12.579726  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:12.579791  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:12.606849  404800 cri.go:89] found id: ""
	I1212 20:37:12.606863  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.606870  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:12.606878  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:12.606942  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:12.632352  404800 cri.go:89] found id: ""
	I1212 20:37:12.632386  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.632394  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:12.632400  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:12.632464  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:12.657776  404800 cri.go:89] found id: ""
	I1212 20:37:12.657791  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.657798  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:12.657805  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:12.657816  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:12.672067  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:12.672083  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:12.744080  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:12.736614   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.737064   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738565   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738904   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.740331   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:12.736614   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.737064   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738565   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738904   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.740331   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:12.744093  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:12.744103  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:12.811395  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:12.811414  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:12.839843  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:12.839862  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:15.405601  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:15.417051  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:15.417110  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:15.442503  404800 cri.go:89] found id: ""
	I1212 20:37:15.442517  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.442524  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:15.442530  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:15.442588  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:15.483736  404800 cri.go:89] found id: ""
	I1212 20:37:15.483763  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.483770  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:15.483775  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:15.483843  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:15.515671  404800 cri.go:89] found id: ""
	I1212 20:37:15.515685  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.515692  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:15.515697  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:15.515764  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:15.548136  404800 cri.go:89] found id: ""
	I1212 20:37:15.548151  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.548158  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:15.548163  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:15.548221  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:15.576936  404800 cri.go:89] found id: ""
	I1212 20:37:15.576951  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.576958  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:15.576962  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:15.577022  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:15.603608  404800 cri.go:89] found id: ""
	I1212 20:37:15.603622  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.603629  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:15.603634  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:15.603689  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:15.638105  404800 cri.go:89] found id: ""
	I1212 20:37:15.638125  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.638133  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:15.638140  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:15.638150  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:15.708493  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:15.708513  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:15.723827  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:15.723851  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:15.792302  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:15.784344   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.784799   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786487   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786941   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.788392   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:15.784344   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.784799   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786487   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786941   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.788392   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:15.792314  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:15.792326  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:15.860772  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:15.860796  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:18.397462  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:18.407317  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:18.407382  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:18.433353  404800 cri.go:89] found id: ""
	I1212 20:37:18.433368  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.433375  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:18.433379  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:18.433435  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:18.465547  404800 cri.go:89] found id: ""
	I1212 20:37:18.465561  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.465568  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:18.465572  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:18.465629  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:18.498811  404800 cri.go:89] found id: ""
	I1212 20:37:18.498825  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.498832  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:18.498837  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:18.498894  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:18.525729  404800 cri.go:89] found id: ""
	I1212 20:37:18.525745  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.525752  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:18.525758  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:18.525820  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:18.555807  404800 cri.go:89] found id: ""
	I1212 20:37:18.555822  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.555829  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:18.555834  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:18.555890  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:18.586968  404800 cri.go:89] found id: ""
	I1212 20:37:18.586982  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.586989  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:18.586994  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:18.587048  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:18.613654  404800 cri.go:89] found id: ""
	I1212 20:37:18.613668  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.613675  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:18.613683  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:18.613694  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:18.685435  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:18.685464  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:18.701543  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:18.701560  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:18.771148  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:18.762368   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.763025   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765038   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765857   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.767427   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:18.762368   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.763025   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765038   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765857   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.767427   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:18.771159  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:18.771169  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:18.840302  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:18.840324  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:21.370649  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:21.380730  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:21.380785  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:21.407262  404800 cri.go:89] found id: ""
	I1212 20:37:21.407277  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.407285  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:21.407290  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:21.407353  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:21.431725  404800 cri.go:89] found id: ""
	I1212 20:37:21.431741  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.431748  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:21.431753  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:21.431808  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:21.462830  404800 cri.go:89] found id: ""
	I1212 20:37:21.462844  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.462851  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:21.462856  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:21.462914  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:21.490038  404800 cri.go:89] found id: ""
	I1212 20:37:21.490053  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.490060  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:21.490066  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:21.490123  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:21.522135  404800 cri.go:89] found id: ""
	I1212 20:37:21.522152  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.522165  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:21.522170  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:21.522243  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:21.550272  404800 cri.go:89] found id: ""
	I1212 20:37:21.550286  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.550293  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:21.550298  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:21.550352  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:21.575855  404800 cri.go:89] found id: ""
	I1212 20:37:21.575868  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.575875  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:21.575882  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:21.575892  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:21.643213  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:21.643234  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:21.676057  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:21.676076  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:21.746870  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:21.746890  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:21.762368  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:21.762383  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:21.829472  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:21.821498   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.822053   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.823553   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.824031   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.825114   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:21.821498   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.822053   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.823553   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.824031   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.825114   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:24.331150  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:24.341451  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:24.341509  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:24.365339  404800 cri.go:89] found id: ""
	I1212 20:37:24.365354  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.365362  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:24.365367  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:24.365430  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:24.392822  404800 cri.go:89] found id: ""
	I1212 20:37:24.392837  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.392844  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:24.392849  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:24.392941  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:24.419333  404800 cri.go:89] found id: ""
	I1212 20:37:24.419347  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.419354  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:24.419365  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:24.419422  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:24.444927  404800 cri.go:89] found id: ""
	I1212 20:37:24.444940  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.444947  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:24.444952  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:24.445014  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:24.479382  404800 cri.go:89] found id: ""
	I1212 20:37:24.479411  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.479422  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:24.479427  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:24.479496  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:24.519373  404800 cri.go:89] found id: ""
	I1212 20:37:24.519387  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.519394  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:24.519399  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:24.519458  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:24.546714  404800 cri.go:89] found id: ""
	I1212 20:37:24.546729  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.546736  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:24.546744  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:24.546755  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:24.612546  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:24.612568  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:24.627419  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:24.627435  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:24.695735  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:24.686719   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.687385   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689276   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689753   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.691296   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:24.686719   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.687385   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689276   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689753   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.691296   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:24.695745  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:24.695757  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:24.764903  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:24.764929  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:27.295998  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:27.306158  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:27.306222  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:27.331510  404800 cri.go:89] found id: ""
	I1212 20:37:27.331524  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.331532  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:27.331549  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:27.331608  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:27.357120  404800 cri.go:89] found id: ""
	I1212 20:37:27.357134  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.357141  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:27.357146  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:27.357227  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:27.383390  404800 cri.go:89] found id: ""
	I1212 20:37:27.383404  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.383411  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:27.383416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:27.383471  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:27.408672  404800 cri.go:89] found id: ""
	I1212 20:37:27.408687  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.408695  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:27.408699  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:27.408758  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:27.434453  404800 cri.go:89] found id: ""
	I1212 20:37:27.434467  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.434478  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:27.434483  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:27.434542  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:27.467590  404800 cri.go:89] found id: ""
	I1212 20:37:27.467603  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.467610  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:27.467615  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:27.467672  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:27.501872  404800 cri.go:89] found id: ""
	I1212 20:37:27.501886  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.501893  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:27.501900  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:27.501912  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:27.574950  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:27.574971  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:27.590147  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:27.590163  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:27.659572  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:27.651234   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.652048   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.653725   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.654359   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.655385   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:27.651234   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.652048   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.653725   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.654359   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.655385   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:27.659583  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:27.659594  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:27.728089  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:27.728111  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:30.260552  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:30.272906  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:30.272984  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:30.302879  404800 cri.go:89] found id: ""
	I1212 20:37:30.302903  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.302911  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:30.302916  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:30.302993  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:30.332792  404800 cri.go:89] found id: ""
	I1212 20:37:30.332807  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.332814  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:30.332819  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:30.332877  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:30.359283  404800 cri.go:89] found id: ""
	I1212 20:37:30.359298  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.359306  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:30.359311  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:30.359369  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:30.385609  404800 cri.go:89] found id: ""
	I1212 20:37:30.385624  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.385643  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:30.385649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:30.385709  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:30.410328  404800 cri.go:89] found id: ""
	I1212 20:37:30.410343  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.410358  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:30.410362  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:30.410423  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:30.435005  404800 cri.go:89] found id: ""
	I1212 20:37:30.435019  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.435026  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:30.435031  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:30.435089  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:30.474088  404800 cri.go:89] found id: ""
	I1212 20:37:30.474102  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.474109  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:30.474116  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:30.474127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:30.508894  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:30.508918  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:30.583876  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:30.583895  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:30.599205  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:30.599229  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:30.667713  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:30.659122   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.659662   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661283   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661849   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.663383   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:30.659122   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.659662   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661283   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661849   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.663383   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:30.667723  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:30.667749  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:33.236428  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:33.246549  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:33.246607  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:33.272236  404800 cri.go:89] found id: ""
	I1212 20:37:33.272250  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.272257  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:33.272262  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:33.272324  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:33.297982  404800 cri.go:89] found id: ""
	I1212 20:37:33.297997  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.298004  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:33.298009  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:33.298068  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:33.324170  404800 cri.go:89] found id: ""
	I1212 20:37:33.324183  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.324190  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:33.324195  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:33.324252  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:33.350869  404800 cri.go:89] found id: ""
	I1212 20:37:33.350883  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.350890  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:33.350895  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:33.350950  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:33.376336  404800 cri.go:89] found id: ""
	I1212 20:37:33.376352  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.376360  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:33.376384  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:33.376446  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:33.402358  404800 cri.go:89] found id: ""
	I1212 20:37:33.402371  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.402378  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:33.402384  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:33.402444  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:33.428067  404800 cri.go:89] found id: ""
	I1212 20:37:33.428081  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.428088  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:33.428104  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:33.428114  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:33.498721  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:33.498744  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:33.532343  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:33.532362  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:33.601583  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:33.601603  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:33.616929  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:33.616947  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:33.680299  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:33.671666   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.672531   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674007   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674498   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.676176   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:33.671666   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.672531   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674007   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674498   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.676176   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:36.180540  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:36.191300  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:36.191360  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:36.219483  404800 cri.go:89] found id: ""
	I1212 20:37:36.219498  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.219505  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:36.219511  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:36.219569  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:36.246240  404800 cri.go:89] found id: ""
	I1212 20:37:36.246255  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.246262  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:36.246267  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:36.246326  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:36.272949  404800 cri.go:89] found id: ""
	I1212 20:37:36.272962  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.272969  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:36.272975  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:36.273038  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:36.298716  404800 cri.go:89] found id: ""
	I1212 20:37:36.298731  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.298738  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:36.298743  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:36.298798  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:36.325228  404800 cri.go:89] found id: ""
	I1212 20:37:36.325242  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.325249  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:36.325254  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:36.325312  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:36.350322  404800 cri.go:89] found id: ""
	I1212 20:37:36.350337  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.350344  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:36.350350  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:36.350406  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:36.380083  404800 cri.go:89] found id: ""
	I1212 20:37:36.380097  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.380104  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:36.380117  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:36.380128  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:36.442887  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:36.434327   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.435078   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.436885   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.437411   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.438936   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:36.434327   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.435078   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.436885   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.437411   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.438936   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:36.442899  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:36.442910  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:36.514571  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:36.514592  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:36.549020  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:36.549036  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:36.615002  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:36.615023  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:39.129960  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:39.139842  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:39.139903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:39.164988  404800 cri.go:89] found id: ""
	I1212 20:37:39.165003  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.165010  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:39.165014  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:39.165072  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:39.195151  404800 cri.go:89] found id: ""
	I1212 20:37:39.195166  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.195172  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:39.195177  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:39.195235  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:39.223301  404800 cri.go:89] found id: ""
	I1212 20:37:39.223315  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.223322  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:39.223327  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:39.223384  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:39.248078  404800 cri.go:89] found id: ""
	I1212 20:37:39.248093  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.248100  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:39.248105  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:39.248162  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:39.272363  404800 cri.go:89] found id: ""
	I1212 20:37:39.272403  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.272411  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:39.272415  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:39.272474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:39.297353  404800 cri.go:89] found id: ""
	I1212 20:37:39.297367  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.297374  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:39.297379  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:39.297437  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:39.322842  404800 cri.go:89] found id: ""
	I1212 20:37:39.322855  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.322863  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:39.322870  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:39.322881  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:39.337445  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:39.337460  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:39.398684  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:39.390797   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.391338   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.392503   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.393095   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.394860   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:39.390797   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.391338   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.392503   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.393095   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.394860   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:39.398694  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:39.398704  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:39.472608  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:39.472628  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:39.511488  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:39.517700  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:42.092404  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:42.104757  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:42.104826  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:42.137172  404800 cri.go:89] found id: ""
	I1212 20:37:42.137189  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.137198  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:42.137204  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:42.137277  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:42.168320  404800 cri.go:89] found id: ""
	I1212 20:37:42.168336  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.168344  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:42.168349  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:42.168455  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:42.202618  404800 cri.go:89] found id: ""
	I1212 20:37:42.202633  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.202641  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:42.202647  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:42.202714  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:42.232011  404800 cri.go:89] found id: ""
	I1212 20:37:42.232026  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.232034  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:42.232039  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:42.232101  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:42.260345  404800 cri.go:89] found id: ""
	I1212 20:37:42.260360  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.260398  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:42.260403  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:42.260465  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:42.286857  404800 cri.go:89] found id: ""
	I1212 20:37:42.286882  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.286890  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:42.286898  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:42.286968  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:42.314846  404800 cri.go:89] found id: ""
	I1212 20:37:42.314870  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.314877  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:42.314885  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:42.314898  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:42.382203  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:42.382223  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:42.397537  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:42.397554  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:42.463930  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:42.455367   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.456320   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458022   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458334   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.459806   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:42.455367   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.456320   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458022   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458334   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.459806   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:42.463940  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:42.463951  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:42.539788  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:42.539809  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:45.073125  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:45.091416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:45.091491  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:45.126675  404800 cri.go:89] found id: ""
	I1212 20:37:45.126699  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.126707  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:45.126714  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:45.126789  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:45.167457  404800 cri.go:89] found id: ""
	I1212 20:37:45.167475  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.167483  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:45.167489  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:45.167559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:45.226232  404800 cri.go:89] found id: ""
	I1212 20:37:45.226264  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.226292  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:45.226299  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:45.226372  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:45.273410  404800 cri.go:89] found id: ""
	I1212 20:37:45.273427  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.273435  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:45.273441  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:45.273513  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:45.313155  404800 cri.go:89] found id: ""
	I1212 20:37:45.313171  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.313178  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:45.313183  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:45.313253  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:45.345614  404800 cri.go:89] found id: ""
	I1212 20:37:45.345640  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.345669  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:45.345688  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:45.345851  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:45.375592  404800 cri.go:89] found id: ""
	I1212 20:37:45.375606  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.375614  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:45.375622  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:45.375633  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:45.446441  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:45.446461  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:45.463226  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:45.463243  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:45.540934  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:45.533118   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.533590   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535134   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535468   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.536952   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:45.533118   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.533590   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535134   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535468   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.536952   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:45.540944  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:45.540955  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:45.610027  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:45.610051  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:48.142953  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:48.153422  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:48.153489  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:48.182170  404800 cri.go:89] found id: ""
	I1212 20:37:48.182185  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.182192  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:48.182197  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:48.182255  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:48.207474  404800 cri.go:89] found id: ""
	I1212 20:37:48.207498  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.207506  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:48.207511  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:48.207588  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:48.232357  404800 cri.go:89] found id: ""
	I1212 20:37:48.232391  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.232399  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:48.232404  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:48.232472  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:48.257989  404800 cri.go:89] found id: ""
	I1212 20:37:48.258016  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.258024  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:48.258029  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:48.258095  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:48.282918  404800 cri.go:89] found id: ""
	I1212 20:37:48.282932  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.282940  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:48.282945  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:48.283008  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:48.309285  404800 cri.go:89] found id: ""
	I1212 20:37:48.309299  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.309306  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:48.309311  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:48.309367  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:48.335545  404800 cri.go:89] found id: ""
	I1212 20:37:48.335559  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.335566  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:48.335573  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:48.335586  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:48.401770  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:48.401789  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:48.416320  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:48.416336  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:48.501926  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:48.486330   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.487051   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.492679   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.493283   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.495892   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:48.486330   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.487051   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.492679   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.493283   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.495892   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:48.501944  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:48.501955  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:48.576534  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:48.576555  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:51.105155  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:51.115964  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:51.116028  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:51.145401  404800 cri.go:89] found id: ""
	I1212 20:37:51.145416  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.145433  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:51.145445  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:51.145517  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:51.172664  404800 cri.go:89] found id: ""
	I1212 20:37:51.172679  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.172685  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:51.172690  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:51.172753  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:51.198093  404800 cri.go:89] found id: ""
	I1212 20:37:51.198108  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.198115  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:51.198120  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:51.198179  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:51.223420  404800 cri.go:89] found id: ""
	I1212 20:37:51.223433  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.223449  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:51.223454  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:51.223510  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:51.253134  404800 cri.go:89] found id: ""
	I1212 20:37:51.253157  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.253164  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:51.253170  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:51.253236  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:51.278738  404800 cri.go:89] found id: ""
	I1212 20:37:51.278753  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.278761  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:51.278766  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:51.278821  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:51.304296  404800 cri.go:89] found id: ""
	I1212 20:37:51.304311  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.304318  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:51.304325  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:51.304346  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:51.370289  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:51.370308  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:51.385101  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:51.385116  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:51.449107  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:51.441267   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.441910   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443391   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443793   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.445251   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:51.441267   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.441910   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443391   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443793   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.445251   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:51.449117  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:51.449127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:51.519024  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:51.519047  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:54.054216  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:54.064710  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:54.064769  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:54.091620  404800 cri.go:89] found id: ""
	I1212 20:37:54.091634  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.091641  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:54.091646  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:54.091701  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:54.122000  404800 cri.go:89] found id: ""
	I1212 20:37:54.122013  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.122020  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:54.122025  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:54.122081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:54.151439  404800 cri.go:89] found id: ""
	I1212 20:37:54.151454  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.151461  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:54.151466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:54.151520  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:54.180154  404800 cri.go:89] found id: ""
	I1212 20:37:54.180168  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.180175  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:54.180180  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:54.180235  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:54.206927  404800 cri.go:89] found id: ""
	I1212 20:37:54.206947  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.206954  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:54.206959  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:54.207014  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:54.231274  404800 cri.go:89] found id: ""
	I1212 20:37:54.231288  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.231306  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:54.231312  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:54.231366  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:54.259379  404800 cri.go:89] found id: ""
	I1212 20:37:54.259395  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.259402  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:54.259410  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:54.259420  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:54.325217  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:54.325237  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:54.339913  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:54.339930  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:54.403764  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:54.395245   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.396349   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.397140   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398216   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398891   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:54.395245   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.396349   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.397140   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398216   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398891   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:54.403774  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:54.403786  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:54.474019  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:54.474039  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:57.003568  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:57.016502  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:57.016560  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:57.042988  404800 cri.go:89] found id: ""
	I1212 20:37:57.043003  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.043010  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:57.043015  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:57.043072  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:57.071640  404800 cri.go:89] found id: ""
	I1212 20:37:57.071654  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.071661  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:57.071666  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:57.071737  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:57.098101  404800 cri.go:89] found id: ""
	I1212 20:37:57.098115  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.098123  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:57.098128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:57.098185  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:57.128276  404800 cri.go:89] found id: ""
	I1212 20:37:57.128300  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.128307  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:57.128312  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:57.128432  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:57.158908  404800 cri.go:89] found id: ""
	I1212 20:37:57.158922  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.158930  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:57.158939  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:57.159004  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:57.186146  404800 cri.go:89] found id: ""
	I1212 20:37:57.186161  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.186169  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:57.186174  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:57.186233  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:57.210969  404800 cri.go:89] found id: ""
	I1212 20:37:57.210984  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.210991  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:57.210999  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:57.211017  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:57.225391  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:57.225407  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:57.289597  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:57.280576   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.281422   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.283487   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.284167   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.285566   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:57.280576   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.281422   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.283487   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.284167   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.285566   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:57.289607  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:57.289617  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:57.362750  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:57.362771  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:57.396453  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:57.396470  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:59.967653  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:59.977921  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:59.977984  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:00.032267  404800 cri.go:89] found id: ""
	I1212 20:38:00.032297  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.032306  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:00.032312  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:00.032410  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:00.203733  404800 cri.go:89] found id: ""
	I1212 20:38:00.203752  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.203760  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:00.203766  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:00.203831  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:00.252579  404800 cri.go:89] found id: ""
	I1212 20:38:00.252596  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.252604  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:00.252610  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:00.252678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:00.301983  404800 cri.go:89] found id: ""
	I1212 20:38:00.302000  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.302009  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:00.302014  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:00.302081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:00.336785  404800 cri.go:89] found id: ""
	I1212 20:38:00.336813  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.336821  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:00.336827  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:00.336905  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:00.369703  404800 cri.go:89] found id: ""
	I1212 20:38:00.369720  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.369728  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:00.369749  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:00.369837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:00.404624  404800 cri.go:89] found id: ""
	I1212 20:38:00.404641  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.404649  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:00.404657  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:00.404669  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:00.473595  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:00.473616  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:00.493555  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:00.493572  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:00.568400  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:00.559640   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.560467   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562140   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562808   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.564591   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:00.559640   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.560467   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562140   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562808   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.564591   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:00.568411  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:00.568425  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:00.641391  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:00.641416  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:03.171500  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:03.182094  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:03.182153  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:03.207380  404800 cri.go:89] found id: ""
	I1212 20:38:03.207395  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.207402  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:03.207407  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:03.207465  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:03.232766  404800 cri.go:89] found id: ""
	I1212 20:38:03.232781  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.232788  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:03.232793  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:03.232856  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:03.263589  404800 cri.go:89] found id: ""
	I1212 20:38:03.263604  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.263611  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:03.263620  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:03.263678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:03.289719  404800 cri.go:89] found id: ""
	I1212 20:38:03.289734  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.289741  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:03.289755  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:03.289815  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:03.316755  404800 cri.go:89] found id: ""
	I1212 20:38:03.316770  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.316778  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:03.316783  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:03.316845  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:03.344424  404800 cri.go:89] found id: ""
	I1212 20:38:03.344438  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.344445  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:03.344451  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:03.344508  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:03.371242  404800 cri.go:89] found id: ""
	I1212 20:38:03.371257  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.371265  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:03.371273  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:03.371284  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:03.439155  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:03.439177  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:03.456896  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:03.456912  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:03.536136  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:03.527316   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.527920   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.529686   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.530397   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.532142   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:03.527316   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.527920   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.529686   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.530397   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.532142   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:03.536146  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:03.536159  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:03.610647  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:03.610666  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:06.146575  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:06.157383  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:06.157441  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:06.183306  404800 cri.go:89] found id: ""
	I1212 20:38:06.183321  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.183329  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:06.183334  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:06.183393  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:06.210325  404800 cri.go:89] found id: ""
	I1212 20:38:06.210340  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.210348  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:06.210353  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:06.210411  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:06.235611  404800 cri.go:89] found id: ""
	I1212 20:38:06.235625  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.235632  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:06.235638  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:06.235699  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:06.261846  404800 cri.go:89] found id: ""
	I1212 20:38:06.261860  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.261867  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:06.261872  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:06.261938  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:06.290103  404800 cri.go:89] found id: ""
	I1212 20:38:06.290116  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.290123  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:06.290128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:06.290185  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:06.316022  404800 cri.go:89] found id: ""
	I1212 20:38:06.316037  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.316044  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:06.316049  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:06.316107  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:06.342973  404800 cri.go:89] found id: ""
	I1212 20:38:06.342988  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.342996  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:06.343004  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:06.343015  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:06.413249  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:06.413270  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:06.428467  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:06.428492  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:06.521492  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:06.507208   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.508013   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.511867   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.515565   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.517219   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:06.507208   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.508013   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.511867   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.515565   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.517219   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:06.521503  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:06.521513  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:06.591077  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:06.591100  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:09.125976  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:09.136849  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:09.136908  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:09.163513  404800 cri.go:89] found id: ""
	I1212 20:38:09.163528  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.163535  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:09.163541  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:09.163603  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:09.194011  404800 cri.go:89] found id: ""
	I1212 20:38:09.194026  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.194033  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:09.194038  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:09.194098  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:09.223187  404800 cri.go:89] found id: ""
	I1212 20:38:09.223201  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.223214  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:09.223219  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:09.223278  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:09.253410  404800 cri.go:89] found id: ""
	I1212 20:38:09.253424  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.253431  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:09.253436  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:09.253509  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:09.278330  404800 cri.go:89] found id: ""
	I1212 20:38:09.278344  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.278351  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:09.278356  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:09.278416  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:09.307840  404800 cri.go:89] found id: ""
	I1212 20:38:09.307854  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.307861  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:09.307866  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:09.307924  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:09.335632  404800 cri.go:89] found id: ""
	I1212 20:38:09.335646  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.335653  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:09.335660  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:09.335671  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:09.406024  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:09.406045  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:09.434314  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:09.434331  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:09.515858  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:09.515880  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:09.532868  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:09.532885  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:09.599150  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:09.591061   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.591515   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593132   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593474   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.595021   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:09.591061   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.591515   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593132   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593474   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.595021   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:12.099436  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:12.110285  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:12.110345  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:12.135810  404800 cri.go:89] found id: ""
	I1212 20:38:12.135825  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.135832  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:12.135837  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:12.135897  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:12.160429  404800 cri.go:89] found id: ""
	I1212 20:38:12.160444  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.160451  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:12.160456  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:12.160511  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:12.187065  404800 cri.go:89] found id: ""
	I1212 20:38:12.187080  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.187087  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:12.187092  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:12.187154  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:12.212658  404800 cri.go:89] found id: ""
	I1212 20:38:12.212673  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.212681  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:12.212686  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:12.212743  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:12.238821  404800 cri.go:89] found id: ""
	I1212 20:38:12.238836  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.238843  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:12.238848  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:12.238909  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:12.265300  404800 cri.go:89] found id: ""
	I1212 20:38:12.265315  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.265322  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:12.265332  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:12.265392  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:12.292396  404800 cri.go:89] found id: ""
	I1212 20:38:12.292410  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.292418  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:12.292435  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:12.292445  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:12.358716  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:12.358736  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:12.374039  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:12.374056  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:12.438679  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:12.429880   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.430412   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432221   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432895   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.434800   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:12.429880   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.430412   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432221   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432895   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.434800   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:12.438690  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:12.438701  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:12.519199  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:12.519218  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:15.058664  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:15.078525  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:15.078590  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:15.105060  404800 cri.go:89] found id: ""
	I1212 20:38:15.105075  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.105082  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:15.105088  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:15.105153  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:15.133041  404800 cri.go:89] found id: ""
	I1212 20:38:15.133056  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.133063  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:15.133068  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:15.133133  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:15.160326  404800 cri.go:89] found id: ""
	I1212 20:38:15.160340  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.160347  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:15.160353  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:15.160435  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:15.187814  404800 cri.go:89] found id: ""
	I1212 20:38:15.187828  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.187835  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:15.187840  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:15.187900  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:15.227819  404800 cri.go:89] found id: ""
	I1212 20:38:15.227833  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.227839  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:15.227844  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:15.227901  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:15.255383  404800 cri.go:89] found id: ""
	I1212 20:38:15.255398  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.255404  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:15.255410  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:15.255468  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:15.280977  404800 cri.go:89] found id: ""
	I1212 20:38:15.280991  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.280997  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:15.281005  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:15.281022  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:15.347810  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:15.347832  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:15.362524  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:15.362541  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:15.427106  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:15.418336   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.419038   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.420787   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.421428   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.423218   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:15.418336   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.419038   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.420787   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.421428   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.423218   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:15.427116  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:15.427127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:15.497224  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:15.497244  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:18.029289  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:18.044111  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:18.044210  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:18.071723  404800 cri.go:89] found id: ""
	I1212 20:38:18.071737  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.071745  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:18.071750  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:18.071810  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:18.099105  404800 cri.go:89] found id: ""
	I1212 20:38:18.099119  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.099126  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:18.099131  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:18.099187  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:18.123656  404800 cri.go:89] found id: ""
	I1212 20:38:18.123670  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.123677  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:18.123682  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:18.123739  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:18.150020  404800 cri.go:89] found id: ""
	I1212 20:38:18.150033  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.150040  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:18.150045  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:18.150101  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:18.174527  404800 cri.go:89] found id: ""
	I1212 20:38:18.174541  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.174548  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:18.174552  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:18.174608  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:18.198686  404800 cri.go:89] found id: ""
	I1212 20:38:18.198701  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.198716  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:18.198722  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:18.198779  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:18.223482  404800 cri.go:89] found id: ""
	I1212 20:38:18.223496  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.223512  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:18.223521  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:18.223531  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:18.289154  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:18.289176  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:18.303954  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:18.303970  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:18.371467  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:18.362642   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.363507   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365091   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365692   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.367280   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:18.362642   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.363507   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365091   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365692   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.367280   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:18.371477  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:18.371493  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:18.440117  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:18.440138  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:20.983282  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:20.993766  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:20.993829  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:21.020992  404800 cri.go:89] found id: ""
	I1212 20:38:21.021006  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.021014  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:21.021019  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:21.021081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:21.047844  404800 cri.go:89] found id: ""
	I1212 20:38:21.047857  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.047865  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:21.047869  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:21.047930  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:21.073011  404800 cri.go:89] found id: ""
	I1212 20:38:21.073025  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.073033  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:21.073038  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:21.073095  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:21.098802  404800 cri.go:89] found id: ""
	I1212 20:38:21.098816  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.098823  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:21.098829  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:21.098884  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:21.127579  404800 cri.go:89] found id: ""
	I1212 20:38:21.127594  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.127601  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:21.127606  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:21.127672  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:21.154921  404800 cri.go:89] found id: ""
	I1212 20:38:21.154935  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.154942  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:21.154947  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:21.155001  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:21.181275  404800 cri.go:89] found id: ""
	I1212 20:38:21.181290  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.181297  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:21.181304  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:21.181316  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:21.197100  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:21.197118  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:21.263963  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:21.255290   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.255727   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.257359   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.258725   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.259518   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:21.255290   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.255727   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.257359   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.258725   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.259518   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:21.263974  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:21.263991  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:21.335974  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:21.335994  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:21.364201  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:21.364220  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:23.937090  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:23.947413  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:23.947474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:23.973243  404800 cri.go:89] found id: ""
	I1212 20:38:23.973258  404800 logs.go:282] 0 containers: []
	W1212 20:38:23.973265  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:23.973270  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:23.973324  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:23.999530  404800 cri.go:89] found id: ""
	I1212 20:38:23.999545  404800 logs.go:282] 0 containers: []
	W1212 20:38:23.999552  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:23.999557  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:23.999616  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:24.030165  404800 cri.go:89] found id: ""
	I1212 20:38:24.030180  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.030187  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:24.030193  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:24.030254  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:24.059776  404800 cri.go:89] found id: ""
	I1212 20:38:24.059792  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.059799  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:24.059804  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:24.059882  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:24.086292  404800 cri.go:89] found id: ""
	I1212 20:38:24.086306  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.086330  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:24.086338  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:24.086427  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:24.112150  404800 cri.go:89] found id: ""
	I1212 20:38:24.112164  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.112180  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:24.112185  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:24.112240  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:24.137517  404800 cri.go:89] found id: ""
	I1212 20:38:24.137532  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.137539  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:24.137547  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:24.137557  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:24.207037  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:24.207056  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:24.222129  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:24.222144  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:24.288581  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:24.279746   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.280696   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282388   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282920   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.284780   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:24.279746   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.280696   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282388   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282920   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.284780   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:24.288595  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:24.288605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:24.357884  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:24.357903  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:26.887217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:26.897518  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:26.897580  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:26.926965  404800 cri.go:89] found id: ""
	I1212 20:38:26.926980  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.926987  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:26.926992  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:26.927052  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:26.952974  404800 cri.go:89] found id: ""
	I1212 20:38:26.952988  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.952995  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:26.953000  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:26.953060  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:26.978786  404800 cri.go:89] found id: ""
	I1212 20:38:26.978801  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.978808  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:26.978813  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:26.978870  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:27.008564  404800 cri.go:89] found id: ""
	I1212 20:38:27.008580  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.008590  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:27.008595  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:27.008659  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:27.036286  404800 cri.go:89] found id: ""
	I1212 20:38:27.036301  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.036308  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:27.036313  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:27.036391  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:27.061515  404800 cri.go:89] found id: ""
	I1212 20:38:27.061529  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.061536  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:27.061541  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:27.061604  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:27.090603  404800 cri.go:89] found id: ""
	I1212 20:38:27.090617  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.090624  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:27.090632  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:27.090642  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:27.159097  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:27.150336   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.151193   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.152795   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.153435   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.155082   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:27.150336   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.151193   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.152795   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.153435   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.155082   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:27.159107  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:27.159118  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:27.228300  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:27.228321  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:27.258850  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:27.258867  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:27.328117  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:27.328139  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:29.843406  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:29.853466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:29.853526  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:29.878238  404800 cri.go:89] found id: ""
	I1212 20:38:29.878253  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.878260  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:29.878265  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:29.878323  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:29.907469  404800 cri.go:89] found id: ""
	I1212 20:38:29.907483  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.907490  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:29.907495  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:29.907550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:29.932873  404800 cri.go:89] found id: ""
	I1212 20:38:29.932887  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.932894  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:29.932900  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:29.932962  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:29.958139  404800 cri.go:89] found id: ""
	I1212 20:38:29.958153  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.958160  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:29.958165  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:29.958222  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:29.984390  404800 cri.go:89] found id: ""
	I1212 20:38:29.984405  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.984412  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:29.984416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:29.984474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:30.027335  404800 cri.go:89] found id: ""
	I1212 20:38:30.027351  404800 logs.go:282] 0 containers: []
	W1212 20:38:30.027360  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:30.027365  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:30.027440  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:30.094850  404800 cri.go:89] found id: ""
	I1212 20:38:30.094867  404800 logs.go:282] 0 containers: []
	W1212 20:38:30.094883  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:30.094911  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:30.094939  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:30.129199  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:30.129217  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:30.196813  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:30.196832  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:30.212809  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:30.212829  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:30.281108  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:30.272853   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.273567   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275146   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275609   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.277153   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:30.272853   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.273567   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275146   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275609   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.277153   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:30.281119  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:30.281130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:32.853025  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:32.863369  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:32.863434  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:32.890487  404800 cri.go:89] found id: ""
	I1212 20:38:32.890501  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.890508  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:32.890513  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:32.890570  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:32.915071  404800 cri.go:89] found id: ""
	I1212 20:38:32.915085  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.915093  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:32.915098  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:32.915155  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:32.940096  404800 cri.go:89] found id: ""
	I1212 20:38:32.940117  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.940131  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:32.940142  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:32.940234  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:32.965615  404800 cri.go:89] found id: ""
	I1212 20:38:32.965629  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.965644  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:32.965649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:32.965705  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:32.990438  404800 cri.go:89] found id: ""
	I1212 20:38:32.990452  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.990459  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:32.990466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:32.990527  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:33.018112  404800 cri.go:89] found id: ""
	I1212 20:38:33.018134  404800 logs.go:282] 0 containers: []
	W1212 20:38:33.018141  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:33.018146  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:33.018213  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:33.045014  404800 cri.go:89] found id: ""
	I1212 20:38:33.045029  404800 logs.go:282] 0 containers: []
	W1212 20:38:33.045036  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:33.045043  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:33.045054  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:33.116627  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:33.116649  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:33.131589  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:33.131605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:33.200143  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:33.191174   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.192118   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.193903   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.194394   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.196060   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:33.191174   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.192118   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.193903   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.194394   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.196060   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:33.200152  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:33.200165  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:33.270338  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:33.270359  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:35.806115  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:35.816131  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:35.816187  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:35.841646  404800 cri.go:89] found id: ""
	I1212 20:38:35.841660  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.841667  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:35.841672  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:35.841728  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:35.871233  404800 cri.go:89] found id: ""
	I1212 20:38:35.871247  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.871254  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:35.871259  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:35.871316  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:35.896270  404800 cri.go:89] found id: ""
	I1212 20:38:35.896285  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.896292  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:35.896297  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:35.896354  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:35.923679  404800 cri.go:89] found id: ""
	I1212 20:38:35.923693  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.923700  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:35.923705  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:35.923796  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:35.950841  404800 cri.go:89] found id: ""
	I1212 20:38:35.950856  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.950862  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:35.950867  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:35.950924  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:35.981198  404800 cri.go:89] found id: ""
	I1212 20:38:35.981212  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.981219  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:35.981224  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:35.981282  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:36.016848  404800 cri.go:89] found id: ""
	I1212 20:38:36.016865  404800 logs.go:282] 0 containers: []
	W1212 20:38:36.016872  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:36.016881  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:36.016892  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:36.085541  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:36.085562  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:36.100886  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:36.100904  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:36.169874  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:36.161259   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.162033   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.163626   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.164180   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.165318   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:36.161259   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.162033   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.163626   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.164180   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.165318   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:36.169886  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:36.169897  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:36.239866  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:36.239886  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:38.770757  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:38.781375  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:38.781433  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:38.809421  404800 cri.go:89] found id: ""
	I1212 20:38:38.809436  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.809443  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:38.809448  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:38.809506  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:38.839566  404800 cri.go:89] found id: ""
	I1212 20:38:38.839579  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.839586  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:38.839591  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:38.839652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:38.865187  404800 cri.go:89] found id: ""
	I1212 20:38:38.865201  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.865208  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:38.865213  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:38.865272  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:38.890808  404800 cri.go:89] found id: ""
	I1212 20:38:38.890822  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.890829  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:38.890835  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:38.890891  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:38.917091  404800 cri.go:89] found id: ""
	I1212 20:38:38.917104  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.917117  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:38.917122  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:38.917179  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:38.942942  404800 cri.go:89] found id: ""
	I1212 20:38:38.942957  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.942964  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:38.942970  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:38.943030  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:38.973257  404800 cri.go:89] found id: ""
	I1212 20:38:38.973271  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.973278  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:38.973286  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:38.973296  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:39.043336  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:39.043356  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:39.072568  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:39.072588  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:39.140916  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:39.140937  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:39.157933  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:39.157949  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:39.223417  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:39.215410   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.216412   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.217404   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.218045   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.219600   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:39.215410   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.216412   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.217404   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.218045   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.219600   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:41.723637  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:41.734660  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:41.734716  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:41.767247  404800 cri.go:89] found id: ""
	I1212 20:38:41.767262  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.767269  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:41.767275  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:41.767328  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:41.796221  404800 cri.go:89] found id: ""
	I1212 20:38:41.796235  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.796248  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:41.796253  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:41.796312  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:41.821187  404800 cri.go:89] found id: ""
	I1212 20:38:41.821203  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.821216  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:41.821221  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:41.821284  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:41.847287  404800 cri.go:89] found id: ""
	I1212 20:38:41.847301  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.847308  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:41.847313  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:41.847372  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:41.872067  404800 cri.go:89] found id: ""
	I1212 20:38:41.872082  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.872089  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:41.872093  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:41.872152  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:41.897796  404800 cri.go:89] found id: ""
	I1212 20:38:41.897811  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.897818  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:41.897823  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:41.897881  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:41.923795  404800 cri.go:89] found id: ""
	I1212 20:38:41.923811  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.923818  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:41.923825  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:41.923836  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:41.990470  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:41.990491  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:42.009111  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:42.009130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:42.088409  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:42.077817   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.078495   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.081716   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.082488   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.083610   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:42.077817   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.078495   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.081716   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.082488   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.083610   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:42.088421  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:42.088433  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:42.192507  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:42.192534  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:44.727139  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:44.739542  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:44.739600  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:44.773501  404800 cri.go:89] found id: ""
	I1212 20:38:44.773515  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.773522  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:44.773527  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:44.773589  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:44.800128  404800 cri.go:89] found id: ""
	I1212 20:38:44.800142  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.800149  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:44.800154  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:44.800211  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:44.825549  404800 cri.go:89] found id: ""
	I1212 20:38:44.825563  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.825571  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:44.825576  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:44.825641  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:44.851616  404800 cri.go:89] found id: ""
	I1212 20:38:44.851630  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.851637  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:44.851642  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:44.851701  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:44.877278  404800 cri.go:89] found id: ""
	I1212 20:38:44.877293  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.877300  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:44.877305  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:44.877365  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:44.905623  404800 cri.go:89] found id: ""
	I1212 20:38:44.905637  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.905644  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:44.905649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:44.905705  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:44.931299  404800 cri.go:89] found id: ""
	I1212 20:38:44.931313  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.931319  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:44.931327  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:44.931338  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:44.998840  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:44.998865  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:45.080550  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:45.080572  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:45.173764  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:45.161784   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.162860   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.164308   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166462   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166938   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:45.161784   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.162860   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.164308   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166462   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166938   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:45.173775  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:45.173787  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:45.264449  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:45.264506  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:47.816513  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:47.826919  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:47.826978  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:47.856068  404800 cri.go:89] found id: ""
	I1212 20:38:47.856083  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.856090  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:47.856095  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:47.856154  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:47.883508  404800 cri.go:89] found id: ""
	I1212 20:38:47.883522  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.883529  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:47.883534  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:47.883595  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:47.909513  404800 cri.go:89] found id: ""
	I1212 20:38:47.909527  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.909534  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:47.909539  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:47.909617  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:47.939000  404800 cri.go:89] found id: ""
	I1212 20:38:47.939015  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.939022  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:47.939027  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:47.939084  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:47.965875  404800 cri.go:89] found id: ""
	I1212 20:38:47.965889  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.965897  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:47.965902  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:47.965975  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:47.992041  404800 cri.go:89] found id: ""
	I1212 20:38:47.992056  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.992063  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:47.992068  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:47.992127  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:48.022837  404800 cri.go:89] found id: ""
	I1212 20:38:48.022852  404800 logs.go:282] 0 containers: []
	W1212 20:38:48.022860  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:48.022867  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:48.022880  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:48.039393  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:48.039410  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:48.107317  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:48.098264   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.099224   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.100841   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.101682   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.102665   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:48.098264   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.099224   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.100841   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.101682   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.102665   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:48.107328  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:48.107340  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:48.175841  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:48.175861  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:48.210572  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:48.210594  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:50.783090  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:50.796736  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:50.796840  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:50.825233  404800 cri.go:89] found id: ""
	I1212 20:38:50.825248  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.825255  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:50.825261  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:50.825319  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:50.852180  404800 cri.go:89] found id: ""
	I1212 20:38:50.852194  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.852201  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:50.852206  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:50.852262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:50.878747  404800 cri.go:89] found id: ""
	I1212 20:38:50.878763  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.878770  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:50.878775  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:50.878835  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:50.904522  404800 cri.go:89] found id: ""
	I1212 20:38:50.904536  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.904543  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:50.904548  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:50.904604  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:50.931344  404800 cri.go:89] found id: ""
	I1212 20:38:50.931360  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.931367  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:50.931372  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:50.931428  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:50.957483  404800 cri.go:89] found id: ""
	I1212 20:38:50.957498  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.957505  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:50.957510  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:50.957568  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:50.982756  404800 cri.go:89] found id: ""
	I1212 20:38:50.982771  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.982778  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:50.982785  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:50.982796  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:51.050968  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:51.050990  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:51.066537  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:51.066556  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:51.139075  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:51.129544   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.130952   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.132306   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.133118   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.134432   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:51.129544   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.130952   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.132306   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.133118   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.134432   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:51.139089  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:51.139101  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:51.210713  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:51.210734  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:53.744531  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:53.755115  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:53.755176  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:53.782428  404800 cri.go:89] found id: ""
	I1212 20:38:53.782443  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.782450  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:53.782455  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:53.782513  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:53.809102  404800 cri.go:89] found id: ""
	I1212 20:38:53.809116  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.809123  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:53.809128  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:53.809188  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:53.836479  404800 cri.go:89] found id: ""
	I1212 20:38:53.836492  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.836500  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:53.836505  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:53.836567  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:53.862110  404800 cri.go:89] found id: ""
	I1212 20:38:53.862124  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.862131  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:53.862136  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:53.862193  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:53.888092  404800 cri.go:89] found id: ""
	I1212 20:38:53.888112  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.888119  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:53.888124  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:53.888188  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:53.918381  404800 cri.go:89] found id: ""
	I1212 20:38:53.918412  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.918419  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:53.918425  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:53.918482  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:53.944685  404800 cri.go:89] found id: ""
	I1212 20:38:53.944700  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.944707  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:53.944715  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:53.944726  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:53.976361  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:53.976398  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:54.043617  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:54.043638  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:54.059716  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:54.059735  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:54.127525  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:54.119445   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.119949   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121471   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121928   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.123395   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:54.119445   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.119949   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121471   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121928   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.123395   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:54.127535  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:54.127550  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:56.697671  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:56.712906  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:56.712987  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:56.745699  404800 cri.go:89] found id: ""
	I1212 20:38:56.745713  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.745721  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:56.745726  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:56.745780  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:56.774995  404800 cri.go:89] found id: ""
	I1212 20:38:56.775008  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.775015  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:56.775022  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:56.775076  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:56.801088  404800 cri.go:89] found id: ""
	I1212 20:38:56.801102  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.801109  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:56.801115  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:56.801171  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:56.825939  404800 cri.go:89] found id: ""
	I1212 20:38:56.825953  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.825960  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:56.825965  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:56.826020  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:56.851013  404800 cri.go:89] found id: ""
	I1212 20:38:56.851028  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.851035  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:56.851040  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:56.851099  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:56.875791  404800 cri.go:89] found id: ""
	I1212 20:38:56.875815  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.875823  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:56.875829  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:56.875894  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:56.902106  404800 cri.go:89] found id: ""
	I1212 20:38:56.902121  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.902128  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:56.902136  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:56.902146  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:56.933095  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:56.933112  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:56.999748  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:56.999770  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:57.023866  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:57.023882  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:57.095113  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:57.086986   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.087518   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089030   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089355   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.090800   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:57.086986   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.087518   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089030   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089355   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.090800   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:57.095123  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:57.095133  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:59.665770  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:59.675717  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:59.675792  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:59.701606  404800 cri.go:89] found id: ""
	I1212 20:38:59.701620  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.701626  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:59.701631  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:59.701688  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:59.736582  404800 cri.go:89] found id: ""
	I1212 20:38:59.736597  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.736603  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:59.736609  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:59.736666  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:59.764566  404800 cri.go:89] found id: ""
	I1212 20:38:59.764588  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.764595  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:59.764602  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:59.764664  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:59.793759  404800 cri.go:89] found id: ""
	I1212 20:38:59.793774  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.793781  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:59.793786  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:59.793858  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:59.821810  404800 cri.go:89] found id: ""
	I1212 20:38:59.821824  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.821841  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:59.821846  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:59.821903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:59.851583  404800 cri.go:89] found id: ""
	I1212 20:38:59.851606  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.851614  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:59.851619  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:59.851688  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:59.878726  404800 cri.go:89] found id: ""
	I1212 20:38:59.878740  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.878746  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:59.878754  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:59.878764  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:59.943708  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:59.943728  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:59.958686  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:59.958704  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:00.056135  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:00.034453   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.036639   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.037425   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.039837   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.045102   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:00.034453   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.036639   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.037425   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.039837   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.045102   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:00.056146  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:00.056159  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:00.155066  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:00.155091  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:02.718200  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:02.729492  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:02.729550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:02.760544  404800 cri.go:89] found id: ""
	I1212 20:39:02.760559  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.760566  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:02.760571  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:02.760635  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:02.792146  404800 cri.go:89] found id: ""
	I1212 20:39:02.792161  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.792174  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:02.792180  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:02.792239  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:02.818586  404800 cri.go:89] found id: ""
	I1212 20:39:02.818601  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.818609  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:02.818614  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:02.818678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:02.844172  404800 cri.go:89] found id: ""
	I1212 20:39:02.844187  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.844194  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:02.844199  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:02.844256  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:02.871047  404800 cri.go:89] found id: ""
	I1212 20:39:02.871061  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.871069  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:02.871074  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:02.871132  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:02.898048  404800 cri.go:89] found id: ""
	I1212 20:39:02.898062  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.898070  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:02.898075  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:02.898131  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:02.923194  404800 cri.go:89] found id: ""
	I1212 20:39:02.923209  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.923216  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:02.923224  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:02.923234  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:02.988912  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:02.988932  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:03.004362  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:03.004410  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:03.075259  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:03.067064   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.067768   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069384   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069725   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.071272   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:03.067064   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.067768   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069384   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069725   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.071272   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:03.075269  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:03.075280  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:03.148856  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:03.148876  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:05.677035  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:05.686903  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:05.686961  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:05.722182  404800 cri.go:89] found id: ""
	I1212 20:39:05.722197  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.722204  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:05.722211  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:05.722309  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:05.756818  404800 cri.go:89] found id: ""
	I1212 20:39:05.756832  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.756839  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:05.756844  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:05.756946  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:05.785780  404800 cri.go:89] found id: ""
	I1212 20:39:05.785794  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.785801  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:05.785806  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:05.785862  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:05.816052  404800 cri.go:89] found id: ""
	I1212 20:39:05.816066  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.816073  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:05.816078  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:05.816134  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:05.841695  404800 cri.go:89] found id: ""
	I1212 20:39:05.841709  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.841716  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:05.841721  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:05.841782  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:05.868902  404800 cri.go:89] found id: ""
	I1212 20:39:05.868917  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.868924  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:05.868929  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:05.868998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:05.898574  404800 cri.go:89] found id: ""
	I1212 20:39:05.898589  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.898596  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:05.898603  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:05.898617  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:05.966027  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:05.966048  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:05.980827  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:05.980843  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:06.048518  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:06.039273   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.039766   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041577   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041956   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.043588   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:06.039273   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.039766   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041577   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041956   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.043588   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:06.048528  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:06.048539  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:06.118539  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:06.118566  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:08.648618  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:08.659086  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:08.659147  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:08.684568  404800 cri.go:89] found id: ""
	I1212 20:39:08.684583  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.684590  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:08.684595  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:08.684655  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:08.714848  404800 cri.go:89] found id: ""
	I1212 20:39:08.714862  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.714869  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:08.714873  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:08.714942  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:08.749610  404800 cri.go:89] found id: ""
	I1212 20:39:08.749636  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.749643  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:08.749654  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:08.749720  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:08.780856  404800 cri.go:89] found id: ""
	I1212 20:39:08.780871  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.780878  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:08.780883  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:08.780943  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:08.805202  404800 cri.go:89] found id: ""
	I1212 20:39:08.805216  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.805223  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:08.805228  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:08.805287  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:08.830301  404800 cri.go:89] found id: ""
	I1212 20:39:08.830317  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.830324  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:08.830329  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:08.830389  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:08.857083  404800 cri.go:89] found id: ""
	I1212 20:39:08.857098  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.857105  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:08.857113  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:08.857124  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:08.925442  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:08.925464  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:08.940523  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:08.940539  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:09.013233  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:08.997498   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.998019   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.999823   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.000173   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.008193   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:08.997498   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.998019   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.999823   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.000173   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.008193   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:09.013243  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:09.013254  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:09.085178  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:09.085198  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:11.613987  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:11.624006  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:11.624073  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:11.648868  404800 cri.go:89] found id: ""
	I1212 20:39:11.648883  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.648890  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:11.648902  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:11.648959  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:11.673750  404800 cri.go:89] found id: ""
	I1212 20:39:11.673764  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.673771  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:11.673776  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:11.673837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:11.701310  404800 cri.go:89] found id: ""
	I1212 20:39:11.701324  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.701340  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:11.701347  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:11.701407  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:11.728807  404800 cri.go:89] found id: ""
	I1212 20:39:11.728821  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.728828  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:11.728833  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:11.728898  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:11.762671  404800 cri.go:89] found id: ""
	I1212 20:39:11.762706  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.762715  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:11.762720  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:11.762786  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:11.788450  404800 cri.go:89] found id: ""
	I1212 20:39:11.788481  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.788488  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:11.788493  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:11.788559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:11.816693  404800 cri.go:89] found id: ""
	I1212 20:39:11.816707  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.816714  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:11.816722  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:11.816732  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:11.886583  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:11.878248   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.878964   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.880707   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.881208   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.882676   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:11.878248   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.878964   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.880707   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.881208   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.882676   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:11.886593  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:11.886604  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:11.955026  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:11.955046  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:11.984471  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:11.984489  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:12.054196  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:12.054217  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:14.569266  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:14.579178  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:14.579234  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:14.603297  404800 cri.go:89] found id: ""
	I1212 20:39:14.603312  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.603319  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:14.603324  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:14.603381  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:14.628304  404800 cri.go:89] found id: ""
	I1212 20:39:14.628318  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.628325  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:14.628330  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:14.628404  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:14.653112  404800 cri.go:89] found id: ""
	I1212 20:39:14.653126  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.653133  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:14.653138  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:14.653201  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:14.678048  404800 cri.go:89] found id: ""
	I1212 20:39:14.678063  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.678078  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:14.678083  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:14.678141  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:14.710561  404800 cri.go:89] found id: ""
	I1212 20:39:14.710584  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.710592  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:14.710597  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:14.710662  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:14.744837  404800 cri.go:89] found id: ""
	I1212 20:39:14.744862  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.744870  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:14.744876  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:14.744943  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:14.777906  404800 cri.go:89] found id: ""
	I1212 20:39:14.777920  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.777927  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:14.777936  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:14.777946  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:14.844303  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:14.844323  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:14.859158  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:14.859179  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:14.922392  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:14.913424   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.913976   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.915631   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.916316   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.918007   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:14.913424   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.913976   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.915631   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.916316   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.918007   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:14.922427  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:14.922438  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:14.992900  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:14.992920  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:17.545196  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:17.555712  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:17.555785  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:17.582444  404800 cri.go:89] found id: ""
	I1212 20:39:17.582458  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.582465  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:17.582470  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:17.582527  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:17.606892  404800 cri.go:89] found id: ""
	I1212 20:39:17.606906  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.606926  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:17.606932  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:17.606998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:17.631824  404800 cri.go:89] found id: ""
	I1212 20:39:17.631840  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.631846  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:17.631851  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:17.631906  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:17.658525  404800 cri.go:89] found id: ""
	I1212 20:39:17.658540  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.658548  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:17.658553  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:17.658610  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:17.687764  404800 cri.go:89] found id: ""
	I1212 20:39:17.687777  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.687784  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:17.687789  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:17.687844  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:17.720465  404800 cri.go:89] found id: ""
	I1212 20:39:17.720480  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.720488  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:17.720493  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:17.720561  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:17.758231  404800 cri.go:89] found id: ""
	I1212 20:39:17.758245  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.758261  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:17.758270  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:17.758281  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:17.838248  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:17.838280  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:17.852734  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:17.852752  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:17.918178  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:17.909812   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.910592   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912169   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912772   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.914355   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:17.909812   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.910592   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912169   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912772   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.914355   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:17.918190  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:17.918202  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:17.985880  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:17.985901  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
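(The "container status" step above is a single shell fallback: it prefers crictl when installed and otherwise falls back to docker. A sketch of reproducing it by hand inside the node — assuming `minikube ssh` with the profile used by this test reaches the same node as the log above:

	minikube ssh
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
)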
	I1212 20:39:20.529812  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:20.539894  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:20.539954  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:20.564821  404800 cri.go:89] found id: ""
	I1212 20:39:20.564834  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.564841  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:20.564846  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:20.564903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:20.594524  404800 cri.go:89] found id: ""
	I1212 20:39:20.594538  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.594544  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:20.594549  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:20.594606  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:20.619997  404800 cri.go:89] found id: ""
	I1212 20:39:20.620011  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.620018  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:20.620023  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:20.620079  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:20.644542  404800 cri.go:89] found id: ""
	I1212 20:39:20.644557  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.644564  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:20.644569  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:20.644624  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:20.670273  404800 cri.go:89] found id: ""
	I1212 20:39:20.670289  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.670296  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:20.670302  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:20.670358  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:20.694691  404800 cri.go:89] found id: ""
	I1212 20:39:20.694705  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.694712  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:20.694717  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:20.694771  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:20.739770  404800 cri.go:89] found id: ""
	I1212 20:39:20.739784  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.739791  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:20.739798  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:20.739809  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:20.810407  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:20.810429  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:20.825194  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:20.825210  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:20.899009  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:20.889886   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.890662   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.892566   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.893441   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.894986   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:20.889886   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.890662   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.892566   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.893441   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.894986   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:20.899020  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:20.899032  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:20.977107  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:20.977129  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:23.510601  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:23.521033  404800 kubeadm.go:602] duration metric: took 4m3.32729864s to restartPrimaryControlPlane
	W1212 20:39:23.521093  404800 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 20:39:23.521166  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 20:39:23.936973  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:39:23.949604  404800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:39:23.957638  404800 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:39:23.957691  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:39:23.965470  404800 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:39:23.965481  404800 kubeadm.go:158] found existing configuration files:
	
	I1212 20:39:23.965536  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:39:23.973241  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:39:23.973300  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:39:23.980875  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:39:23.989722  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:39:23.989777  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:39:23.997778  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:39:24.007027  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:39:24.007112  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:39:24.016721  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:39:24.025622  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:39:24.025690  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:39:24.034033  404800 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:39:24.077877  404800 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:39:24.079077  404800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:39:24.152874  404800 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:39:24.152937  404800 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:39:24.152972  404800 kubeadm.go:319] OS: Linux
	I1212 20:39:24.153034  404800 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:39:24.153081  404800 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:39:24.153126  404800 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:39:24.153178  404800 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:39:24.153225  404800 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:39:24.153271  404800 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:39:24.153314  404800 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:39:24.153363  404800 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:39:24.153407  404800 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:39:24.219483  404800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:39:24.219589  404800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:39:24.219678  404800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:39:24.228954  404800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:39:24.234481  404800 out.go:252]   - Generating certificates and keys ...
	I1212 20:39:24.234574  404800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:39:24.234638  404800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:39:24.234713  404800 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:39:24.234772  404800 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:39:24.234841  404800 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:39:24.234896  404800 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:39:24.234958  404800 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:39:24.235017  404800 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:39:24.235090  404800 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:39:24.235172  404800 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:39:24.235208  404800 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:39:24.235263  404800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:39:24.294876  404800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:39:24.534877  404800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:39:24.632916  404800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:39:24.763704  404800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:39:25.183116  404800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:39:25.183864  404800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:39:25.186637  404800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:39:25.190125  404800 out.go:252]   - Booting up control plane ...
	I1212 20:39:25.190229  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:39:25.190325  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:39:25.190412  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:39:25.205322  404800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:39:25.205427  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:39:25.215814  404800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:39:25.216163  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:39:25.216236  404800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:39:25.353073  404800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:39:25.353188  404800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:43:25.353162  404800 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000280513s
	I1212 20:43:25.353205  404800 kubeadm.go:319] 
	I1212 20:43:25.353282  404800 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:43:25.353332  404800 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:43:25.353453  404800 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:43:25.353461  404800 kubeadm.go:319] 
	I1212 20:43:25.353609  404800 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:43:25.353657  404800 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:43:25.353688  404800 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:43:25.353691  404800 kubeadm.go:319] 
	I1212 20:43:25.359119  404800 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:43:25.359579  404800 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:43:25.359715  404800 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:43:25.360004  404800 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:43:25.360010  404800 kubeadm.go:319] 
	I1212 20:43:25.360149  404800 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
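(The failed init above ends with kubeadm's standard kubelet-health advice. The checks it names can be run directly on the node; these are only the commands and the health endpoint already quoted in the log, not additional diagnostics:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	curl -sSL http://127.0.0.1:10248/healthz
)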
	W1212 20:43:25.360245  404800 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000280513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
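(The SystemVerification warning in this attempt says cgroup v1 support is deprecated and that the kubelet option 'FailCgroupV1' must be set to 'false' to keep running on a cgroup v1 host such as this 5.15 kernel. A minimal KubeletConfiguration sketch follows; the field name and casing are inferred from the warning text and the linked KEP rather than verified against the v1.35 kubelet API, so treat it as an assumption:

	# hypothetical snippet - field name taken from the warning above, not verified
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false
)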
	
	I1212 20:43:25.360353  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 20:43:25.770646  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:43:25.783563  404800 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:43:25.783624  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:43:25.791806  404800 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:43:25.791814  404800 kubeadm.go:158] found existing configuration files:
	
	I1212 20:43:25.791862  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:43:25.799745  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:43:25.799799  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:43:25.807302  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:43:25.815035  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:43:25.815084  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:43:25.822960  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:43:25.831068  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:43:25.831122  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:43:25.838463  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:43:25.846379  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:43:25.846433  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:43:25.853821  404800 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:43:25.894714  404800 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:43:25.895009  404800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:43:25.961164  404800 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:43:25.961230  404800 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:43:25.961265  404800 kubeadm.go:319] OS: Linux
	I1212 20:43:25.961309  404800 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:43:25.961355  404800 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:43:25.961404  404800 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:43:25.961451  404800 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:43:25.961498  404800 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:43:25.961544  404800 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:43:25.961587  404800 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:43:25.961634  404800 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:43:25.961678  404800 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:43:26.029509  404800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:43:26.029612  404800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:43:26.029701  404800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:43:26.038278  404800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:43:26.041933  404800 out.go:252]   - Generating certificates and keys ...
	I1212 20:43:26.042043  404800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:43:26.042118  404800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:43:26.042200  404800 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:43:26.042265  404800 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:43:26.042338  404800 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:43:26.042395  404800 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:43:26.042462  404800 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:43:26.042527  404800 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:43:26.042606  404800 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:43:26.042683  404800 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:43:26.042722  404800 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:43:26.042781  404800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:43:26.129341  404800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:43:26.328670  404800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:43:26.553215  404800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:43:26.647700  404800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:43:26.895572  404800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:43:26.896106  404800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:43:26.898924  404800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:43:26.902076  404800 out.go:252]   - Booting up control plane ...
	I1212 20:43:26.902180  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:43:26.902266  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:43:26.902331  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:43:26.916276  404800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:43:26.916395  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:43:26.923968  404800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:43:26.925348  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:43:26.925393  404800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:43:27.058187  404800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:43:27.058300  404800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:47:27.059387  404800 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001189054s
	I1212 20:47:27.059415  404800 kubeadm.go:319] 
	I1212 20:47:27.059512  404800 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:47:27.059567  404800 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:47:27.059889  404800 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:47:27.059895  404800 kubeadm.go:319] 
	I1212 20:47:27.060100  404800 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:47:27.060426  404800 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:47:27.060479  404800 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:47:27.060483  404800 kubeadm.go:319] 
	I1212 20:47:27.064619  404800 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:47:27.065062  404800 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:47:27.065168  404800 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:47:27.065401  404800 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:47:27.065405  404800 kubeadm.go:319] 
	I1212 20:47:27.065522  404800 kubeadm.go:403] duration metric: took 12m6.90957682s to StartCluster
	I1212 20:47:27.065550  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:47:27.065606  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:47:27.065669  404800 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:47:27.091473  404800 cri.go:89] found id: ""
	I1212 20:47:27.091488  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.091495  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:47:27.091500  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:47:27.091559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:47:27.118055  404800 cri.go:89] found id: ""
	I1212 20:47:27.118069  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.118076  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:47:27.118081  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:47:27.118141  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:47:27.144553  404800 cri.go:89] found id: ""
	I1212 20:47:27.144567  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.144574  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:47:27.144579  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:47:27.144636  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:47:27.170138  404800 cri.go:89] found id: ""
	I1212 20:47:27.170152  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.170172  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:47:27.170177  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:47:27.170242  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:47:27.199222  404800 cri.go:89] found id: ""
	I1212 20:47:27.199236  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.199243  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:47:27.199248  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:47:27.199305  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:47:27.225906  404800 cri.go:89] found id: ""
	I1212 20:47:27.225921  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.225929  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:47:27.225934  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:47:27.225993  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:47:27.251774  404800 cri.go:89] found id: ""
	I1212 20:47:27.251788  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.251795  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:47:27.251803  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:47:27.251843  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:47:27.318965  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:47:27.318984  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:47:27.336153  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:47:27.336169  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:47:27.403235  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:47:27.394974   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.395673   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397398   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397865   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.399347   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:47:27.394974   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.395673   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397398   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397865   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.399347   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:47:27.403245  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:47:27.403256  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:47:27.475348  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:47:27.475369  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 20:47:27.504551  404800 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:47:27.504592  404800 out.go:285] * 
	W1212 20:47:27.504699  404800 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:47:27.504759  404800 out.go:285] * 
	W1212 20:47:27.507341  404800 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
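(When filing the issue the box above asks for, the log bundle would be collected with the command it names, with the profile name substituted for the one this test used:

	minikube logs --file=logs.txt -p <profile>
)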
	I1212 20:47:27.514164  404800 out.go:203] 
	W1212 20:47:27.517009  404800 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:47:27.517056  404800 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:47:27.517078  404800 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:47:27.520151  404800 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617557022Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617594914Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617644933Z" level=info msg="Create NRI interface"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617744979Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617956402Z" level=info msg="runtime interface created"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617981551Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617990003Z" level=info msg="runtime interface starting up..."
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618002294Z" level=info msg="starting plugins..."
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618017146Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618092166Z" level=info msg="No systemd watchdog enabled"
	Dec 12 20:35:18 functional-261311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.223066755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=efc21d87-a1b0-4de5-a48b-a3e014a5db32 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.223827337Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e9bb6f76-9bf0-445e-a911-5989a7f224b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.224384709Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=eb32b7e0-d164-45f4-be96-6799b271663a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.224808771Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=192a05d5-754c-4620-9a7e-630a23b2f5d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.225240365Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d03d55da-4587-4eea-8a9a-e52381826a03 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.225676677Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c7d002dd-9552-4715-b7be-2078da811840 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.226165084Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=daf96e40-8252-45d3-a005-ea53669f5cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.033616408Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2a067e1-2c90-429c-b592-c0026a728c8d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.0344028Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=faa7c5c2-57de-45d0-98b9-b1fc40b3897e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.034956632Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=86aed69e-89fa-4789-b7e0-66c21b53b655 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.035606867Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e8b67148-2c8a-4d5b-8bc5-9c052262c589 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036159986Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=56696416-3d0f-4c77-8dbb-77790563b13a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036707976Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ba267d45-19b4-448f-9f92-2993fe38692a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.037209312Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=301781f2-4844-424c-a8ec-9528bb0007ad name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:49:34.138744   23277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:34.139160   23277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:34.140849   23277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:34.141204   23277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:34.142844   23277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:49:34 up  3:32,  0 user,  load average: 0.53, 0.30, 0.52
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:49:31 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:32 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1128.
	Dec 12 20:49:32 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:32 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:32 functional-261311 kubelet[23133]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:32 functional-261311 kubelet[23133]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:32 functional-261311 kubelet[23133]: E1212 20:49:32.216043   23133 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:32 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:32 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:32 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1129.
	Dec 12 20:49:32 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:32 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:33 functional-261311 kubelet[23171]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:33 functional-261311 kubelet[23171]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:33 functional-261311 kubelet[23171]: E1212 20:49:33.038167   23171 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:33 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:33 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:33 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1130.
	Dec 12 20:49:33 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:33 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:33 functional-261311 kubelet[23193]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:33 functional-261311 kubelet[23193]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:33 functional-261311 kubelet[23193]: E1212 20:49:33.769674   23193 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:33 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:33 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (360.472128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (3.15s)
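The kubelet entries above show the underlying failure for this group of tests: kubelet v1.35.0-beta.0 refuses to start because the host is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver on port 8441 never comes up and every kubectl call is refused. As a rough sketch only (the field spelling is taken from the '[WARNING SystemVerification]' text earlier in the log and has not been verified against this kubelet build), the opt-out that warning names would be a KubeletConfiguration fragment such as:

	# Sketch, not part of this test run: opt back into cgroup v1 for kubelet v1.35 or newer,
	# as described by the kubeadm SystemVerification warning above. Field name assumed
	# to serialize as failCgroupV1.
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false

Per the same warning, the SystemVerification preflight check would also have to be skipped explicitly; migrating the host to cgroup v2 avoids the override altogether.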

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-261311 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-261311 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (55.904868ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-261311 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-261311 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-261311 describe po hello-node-connect: exit status 1 (56.572951ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1614: "kubectl --context functional-261311 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-261311 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-261311 logs -l app=hello-node-connect: exit status 1 (62.526757ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-261311 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-261311 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-261311 describe svc hello-node-connect: exit status 1 (58.836515ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test.go:1626: "kubectl --context functional-261311 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (311.131573ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                             ARGS                                                                             │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cache   │ functional-261311 cache reload                                                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ ssh     │ functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                                                      │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:3.1                                                                                                                             │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ cache   │ delete registry.k8s.io/pause:latest                                                                                                                          │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │ 12 Dec 25 20:35 UTC │
	│ kubectl │ functional-261311 kubectl -- --context functional-261311 get pods                                                                                            │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	│ start   │ -p functional-261311 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                                                     │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:35 UTC │                     │
	│ config  │ functional-261311 config unset cpus                                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ cp      │ functional-261311 cp testdata/cp-test.txt /home/docker/cp-test.txt                                                                                           │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ config  │ functional-261311 config get cpus                                                                                                                            │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │                     │
	│ config  │ functional-261311 config set cpus 2                                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ config  │ functional-261311 config get cpus                                                                                                                            │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ config  │ functional-261311 config unset cpus                                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ ssh     │ functional-261311 ssh -n functional-261311 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ config  │ functional-261311 config get cpus                                                                                                                            │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │                     │
	│ ssh     │ functional-261311 ssh echo hello                                                                                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ cp      │ functional-261311 cp functional-261311:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1560345641/001/cp-test.txt │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ ssh     │ functional-261311 ssh cat /etc/hostname                                                                                                                      │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ ssh     │ functional-261311 ssh -n functional-261311 sudo cat /home/docker/cp-test.txt                                                                                 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ tunnel  │ functional-261311 tunnel --alsologtostderr                                                                                                                   │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │                     │
	│ tunnel  │ functional-261311 tunnel --alsologtostderr                                                                                                                   │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │                     │
	│ cp      │ functional-261311 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                    │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ tunnel  │ functional-261311 tunnel --alsologtostderr                                                                                                                   │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │                     │
	│ ssh     │ functional-261311 ssh -n functional-261311 sudo cat /tmp/does/not/exist/cp-test.txt                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:47 UTC │ 12 Dec 25 20:47 UTC │
	│ addons  │ functional-261311 addons list                                                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ addons  │ functional-261311 addons list -o json                                                                                                                        │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:35:15
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:35:15.460416  404800 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:35:15.460537  404800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:35:15.460541  404800 out.go:374] Setting ErrFile to fd 2...
	I1212 20:35:15.460545  404800 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:35:15.461281  404800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:35:15.461704  404800 out.go:368] Setting JSON to false
	I1212 20:35:15.462524  404800 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":11868,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:35:15.462588  404800 start.go:143] virtualization:  
	I1212 20:35:15.465993  404800 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:35:15.469163  404800 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:35:15.469272  404800 notify.go:221] Checking for updates...
	I1212 20:35:15.475214  404800 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:35:15.478288  404800 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:35:15.481030  404800 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:35:15.483916  404800 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:35:15.486846  404800 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:35:15.490383  404800 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:35:15.490523  404800 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:35:15.521733  404800 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:35:15.521840  404800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:35:15.586834  404800 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 20:35:15.575092276 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:35:15.586929  404800 docker.go:319] overlay module found
	I1212 20:35:15.590005  404800 out.go:179] * Using the docker driver based on existing profile
	I1212 20:35:15.592944  404800 start.go:309] selected driver: docker
	I1212 20:35:15.592962  404800 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:15.593077  404800 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:35:15.593201  404800 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:35:15.653530  404800 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:55 SystemTime:2025-12-12 20:35:15.644295166 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:35:15.653919  404800 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:35:15.653944  404800 cni.go:84] Creating CNI manager for ""
	I1212 20:35:15.653992  404800 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:35:15.654035  404800 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:15.657113  404800 out.go:179] * Starting "functional-261311" primary control-plane node in "functional-261311" cluster
	I1212 20:35:15.659873  404800 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:35:15.662874  404800 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:35:15.665759  404800 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:35:15.665839  404800 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:35:15.665900  404800 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:35:15.665919  404800 cache.go:65] Caching tarball of preloaded images
	I1212 20:35:15.666041  404800 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:35:15.666050  404800 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 20:35:15.666202  404800 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/config.json ...
	I1212 20:35:15.685367  404800 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:35:15.685378  404800 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:35:15.685400  404800 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:35:15.685432  404800 start.go:360] acquireMachinesLock for functional-261311: {Name:mkbc4e6c743e47953e99b8ce65e244d33b483105 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:35:15.685502  404800 start.go:364] duration metric: took 54.475µs to acquireMachinesLock for "functional-261311"
	I1212 20:35:15.685521  404800 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:35:15.685526  404800 fix.go:54] fixHost starting: 
	I1212 20:35:15.685789  404800 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
	I1212 20:35:15.703273  404800 fix.go:112] recreateIfNeeded on functional-261311: state=Running err=<nil>
	W1212 20:35:15.703293  404800 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:35:15.712450  404800 out.go:252] * Updating the running docker "functional-261311" container ...
	I1212 20:35:15.712481  404800 machine.go:94] provisionDockerMachine start ...
	I1212 20:35:15.712578  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:15.736656  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:15.736977  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:15.736984  404800 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:35:15.891915  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:35:15.891929  404800 ubuntu.go:182] provisioning hostname "functional-261311"
	I1212 20:35:15.891999  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:15.910460  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:15.910779  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:15.910787  404800 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-261311 && echo "functional-261311" | sudo tee /etc/hostname
	I1212 20:35:16.077690  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-261311
	
	I1212 20:35:16.077778  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.097025  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:16.097341  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:16.097354  404800 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-261311' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-261311/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-261311' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:35:16.252758  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:35:16.252773  404800 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:35:16.252793  404800 ubuntu.go:190] setting up certificates
	I1212 20:35:16.252801  404800 provision.go:84] configureAuth start
	I1212 20:35:16.252918  404800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:35:16.270682  404800 provision.go:143] copyHostCerts
	I1212 20:35:16.270755  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:35:16.270763  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:35:16.270834  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:35:16.270926  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:35:16.270930  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:35:16.270953  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:35:16.271010  404800 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:35:16.271014  404800 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:35:16.271036  404800 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:35:16.271079  404800 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.functional-261311 san=[127.0.0.1 192.168.49.2 functional-261311 localhost minikube]
	I1212 20:35:16.466046  404800 provision.go:177] copyRemoteCerts
	I1212 20:35:16.466103  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:35:16.466141  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.490439  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:16.596331  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:35:16.614499  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1212 20:35:16.632168  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 20:35:16.649948  404800 provision.go:87] duration metric: took 397.124655ms to configureAuth
	I1212 20:35:16.649967  404800 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:35:16.650174  404800 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:35:16.650275  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:16.667262  404800 main.go:143] libmachine: Using SSH client type: native
	I1212 20:35:16.667562  404800 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33162 <nil> <nil>}
	I1212 20:35:16.667574  404800 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:35:17.020390  404800 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:35:17.020403  404800 machine.go:97] duration metric: took 1.307915361s to provisionDockerMachine
	I1212 20:35:17.020413  404800 start.go:293] postStartSetup for "functional-261311" (driver="docker")
	I1212 20:35:17.020431  404800 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:35:17.020498  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:35:17.020542  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.039179  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.144817  404800 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:35:17.148499  404800 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:35:17.148517  404800 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:35:17.148528  404800 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:35:17.148587  404800 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:35:17.148671  404800 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:35:17.148745  404800 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts -> hosts in /etc/test/nested/copy/364853
	I1212 20:35:17.148790  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/364853
	I1212 20:35:17.156874  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:35:17.175633  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts --> /etc/test/nested/copy/364853/hosts (40 bytes)
	I1212 20:35:17.193693  404800 start.go:296] duration metric: took 173.265259ms for postStartSetup
	I1212 20:35:17.193768  404800 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:35:17.193829  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.212738  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.326054  404800 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:35:17.331128  404800 fix.go:56] duration metric: took 1.64559363s for fixHost
	I1212 20:35:17.331145  404800 start.go:83] releasing machines lock for "functional-261311", held for 1.645635346s
	I1212 20:35:17.331211  404800 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-261311
	I1212 20:35:17.348942  404800 ssh_runner.go:195] Run: cat /version.json
	I1212 20:35:17.348993  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.349240  404800 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:35:17.349288  404800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
	I1212 20:35:17.377660  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.380423  404800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
	I1212 20:35:17.480436  404800 ssh_runner.go:195] Run: systemctl --version
	I1212 20:35:17.572826  404800 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:35:17.610243  404800 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:35:17.614893  404800 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:35:17.614954  404800 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:35:17.623289  404800 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:35:17.623303  404800 start.go:496] detecting cgroup driver to use...
	I1212 20:35:17.623333  404800 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:35:17.623377  404800 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:35:17.638845  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:35:17.652624  404800 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:35:17.652690  404800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:35:17.668971  404800 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:35:17.682562  404800 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:35:17.807109  404800 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:35:17.921667  404800 docker.go:234] disabling docker service ...
	I1212 20:35:17.921741  404800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:35:17.940321  404800 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:35:17.957092  404800 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:35:18.087741  404800 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:35:18.206163  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:35:18.219734  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:35:18.233813  404800 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:35:18.233881  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.242826  404800 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:35:18.242900  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.252023  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.261290  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.270163  404800 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:35:18.278452  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.287612  404800 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.296129  404800 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:35:18.305360  404800 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:35:18.313008  404800 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:35:18.320507  404800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:35:18.433496  404800 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:35:18.624476  404800 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:35:18.624545  404800 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:35:18.628455  404800 start.go:564] Will wait 60s for crictl version
	I1212 20:35:18.628509  404800 ssh_runner.go:195] Run: which crictl
	I1212 20:35:18.631901  404800 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:35:18.657967  404800 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:35:18.658043  404800 ssh_runner.go:195] Run: crio --version
	I1212 20:35:18.686054  404800 ssh_runner.go:195] Run: crio --version
	I1212 20:35:18.728907  404800 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 20:35:18.731836  404800 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:35:18.758101  404800 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:35:18.765430  404800 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1212 20:35:18.768359  404800 kubeadm.go:884] updating cluster {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:35:18.768498  404800 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:35:18.768569  404800 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:35:18.809159  404800 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:35:18.809172  404800 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:35:18.809226  404800 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:35:18.835786  404800 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:35:18.835798  404800 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:35:18.835804  404800 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 crio true true} ...
	I1212 20:35:18.835897  404800 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=functional-261311 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:35:18.835978  404800 ssh_runner.go:195] Run: crio config
	I1212 20:35:18.911975  404800 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1212 20:35:18.911996  404800 cni.go:84] Creating CNI manager for ""
	I1212 20:35:18.912005  404800 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:35:18.912021  404800 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:35:18.912048  404800 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-261311 NodeName:functional-261311 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:35:18.912174  404800 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "functional-261311"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:35:18.912242  404800 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 20:35:18.919878  404800 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:35:18.919945  404800 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 20:35:18.927506  404800 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (374 bytes)
	I1212 20:35:18.940260  404800 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 20:35:18.953546  404800 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2071 bytes)
	I1212 20:35:18.966154  404800 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1212 20:35:18.969878  404800 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:35:19.088694  404800 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:35:19.456785  404800 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311 for IP: 192.168.49.2
	I1212 20:35:19.456797  404800 certs.go:195] generating shared ca certs ...
	I1212 20:35:19.456811  404800 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:35:19.457015  404800 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:35:19.457061  404800 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:35:19.457083  404800 certs.go:257] generating profile certs ...
	I1212 20:35:19.457188  404800 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.key
	I1212 20:35:19.457266  404800 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key.8bc713d7
	I1212 20:35:19.457320  404800 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key
	I1212 20:35:19.457484  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:35:19.457522  404800 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:35:19.457530  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:35:19.457572  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:35:19.457613  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:35:19.457656  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:35:19.457720  404800 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:35:19.458537  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:35:19.481387  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:35:19.503914  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:35:19.527911  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:35:19.547817  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1212 20:35:19.567001  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 20:35:19.585411  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:35:19.603199  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:35:19.621415  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:35:19.639746  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:35:19.657747  404800 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:35:19.675414  404800 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:35:19.688797  404800 ssh_runner.go:195] Run: openssl version
	I1212 20:35:19.695324  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.703181  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:35:19.710800  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.714682  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.714738  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:35:19.755943  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:35:19.764525  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.772260  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:35:19.780093  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.783725  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.783778  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:35:19.825039  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:35:19.832411  404800 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.839917  404800 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:35:19.847683  404800 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.851494  404800 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.851551  404800 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:35:19.892840  404800 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:35:19.900611  404800 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:35:19.904415  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:35:19.945816  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:35:19.987206  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:35:20.028949  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:35:20.071640  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:35:20.114011  404800 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:35:20.155956  404800 kubeadm.go:401] StartCluster: {Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:35:20.156040  404800 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:35:20.156106  404800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:35:20.185271  404800 cri.go:89] found id: ""
	I1212 20:35:20.185335  404800 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:35:20.193716  404800 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:35:20.193726  404800 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:35:20.193778  404800 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:35:20.201404  404800 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.201928  404800 kubeconfig.go:125] found "functional-261311" server: "https://192.168.49.2:8441"
	I1212 20:35:20.203285  404800 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:35:20.213068  404800 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 20:20:42.746943766 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 20:35:18.963900938 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
	I1212 20:35:20.213088  404800 kubeadm.go:1161] stopping kube-system containers ...
	I1212 20:35:20.213099  404800 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 20:35:20.213154  404800 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:35:20.242899  404800 cri.go:89] found id: ""
	I1212 20:35:20.242960  404800 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 20:35:20.261588  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:35:20.270004  404800 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 12 20:24 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 12 20:24 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 12 20:24 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 12 20:24 /etc/kubernetes/scheduler.conf
	
	I1212 20:35:20.270062  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:35:20.278110  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:35:20.285789  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.285844  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:35:20.293376  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:35:20.301132  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.301185  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:35:20.309065  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:35:20.316914  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:35:20.316967  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:35:20.324673  404800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:35:20.332520  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:20.381164  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:21.740495  404800 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.359307117s)
	I1212 20:35:21.740554  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:21.936349  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:22.006437  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 20:35:22.060809  404800 api_server.go:52] waiting for apiserver process to appear ...
	I1212 20:35:22.060899  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:22.561081  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:23.062037  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:23.561673  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:24.061283  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:24.561690  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:25.061084  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:25.561740  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:26.061753  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:26.561615  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:27.061476  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:27.561193  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:28.061088  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:28.561754  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:29.061218  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:29.561124  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:30.061364  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:30.561503  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:31.061616  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:31.561042  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:32.061002  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:32.561635  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:33.061101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:33.561100  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:34.061640  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:34.562032  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:35.061030  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:35.561966  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:36.061881  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:36.561895  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:37.061604  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:37.561065  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:38.062060  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:38.561065  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:39.061118  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:39.561000  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:40.061043  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:40.561911  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:41.061748  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:41.561627  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:42.061101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:42.561174  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:43.061190  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:43.561060  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:44.061057  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:44.561587  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:45.061910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:45.561122  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:46.061055  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:46.561141  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:47.061107  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:47.560994  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:48.062000  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:48.561057  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:49.061151  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:49.561089  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:50.061007  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:50.561745  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:51.061094  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:51.561413  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:52.061652  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:52.561706  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:53.061685  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:53.561118  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:54.061047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:54.561109  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:55.061626  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:55.561543  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:56.061374  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:56.561047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:57.062047  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:57.561053  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:58.061760  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:58.561015  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:59.061910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:35:59.561602  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:00.061050  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:00.565101  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:01.061738  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:01.561016  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:02.061584  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:02.561705  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:03.062021  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:03.561146  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:04.061266  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:04.561610  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:05.061786  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:05.561910  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:06.062016  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:06.561621  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:07.061104  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:07.561077  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:08.061034  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:08.561076  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:09.061095  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:09.561610  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:10.062030  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:10.561403  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:11.061217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:11.561772  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:12.061561  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:12.561252  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:13.061001  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:13.561813  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:14.061556  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:14.561701  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:15.061061  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:15.561415  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:16.061155  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:16.561701  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:17.061682  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:17.561217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:18.061108  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:18.561055  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:19.061653  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:19.561105  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:20.061064  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:20.561836  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:21.061167  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:21.561650  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:22.061836  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:22.061921  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:22.088621  404800 cri.go:89] found id: ""
	I1212 20:36:22.088636  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.088643  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:22.088648  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:22.088710  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:22.115845  404800 cri.go:89] found id: ""
	I1212 20:36:22.115860  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.115867  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:22.115872  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:22.115934  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:22.145607  404800 cri.go:89] found id: ""
	I1212 20:36:22.145622  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.145629  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:22.145634  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:22.145694  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:22.175762  404800 cri.go:89] found id: ""
	I1212 20:36:22.175782  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.175790  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:22.175795  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:22.175852  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:22.205262  404800 cri.go:89] found id: ""
	I1212 20:36:22.205277  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.205283  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:22.205288  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:22.205343  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:22.240968  404800 cri.go:89] found id: ""
	I1212 20:36:22.240981  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.240988  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:22.240993  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:22.241050  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:22.272662  404800 cri.go:89] found id: ""
	I1212 20:36:22.272676  404800 logs.go:282] 0 containers: []
	W1212 20:36:22.272683  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:22.272691  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:22.272700  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:22.301824  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:22.301841  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:22.370470  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:22.370488  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:22.385289  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:22.385306  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:22.449648  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:22.440970   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.441631   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443294   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443822   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.445497   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:22.440970   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.441631   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443294   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.443822   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:22.445497   11024 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:22.449659  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:22.449670  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
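The cycle above shows the probe pattern this log repeats from here on: the apiserver process is polled with pgrep roughly every 500ms, and when no process appears the runner falls back to listing CRI containers for each control-plane component and then gathering kubelet, dmesg, describe-nodes and CRI-O logs. Below is a minimal, purely illustrative Go sketch of that probe/fallback pattern; the command strings are copied verbatim from the log lines above, while the loop structure, timings and function names are assumptions for illustration and are not minikube's actual implementation.

// Illustrative sketch only: poll for a kube-apiserver process, then fall back
// to asking the CRI runtime which control-plane containers exist. The shelled-out
// commands match the ones recorded in the log above; everything else is assumed.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the pgrep probe seen in the log.
func apiserverRunning() bool {
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil
}

// listContainers mirrors the crictl fallback seen in the log.
func listContainers(name string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	return string(out), err
}

func main() {
	// Poll every 500ms, as the timestamps in the log suggest.
	for i := 0; i < 20; i++ {
		if apiserverRunning() {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	// No process: check what the container runtime knows about, component by component.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainers(c)
		if err != nil || ids == "" {
			fmt.Printf("no container found matching %q\n", c)
		}
	}
}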
	I1212 20:36:25.019320  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:25.030277  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:25.030345  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:25.060950  404800 cri.go:89] found id: ""
	I1212 20:36:25.060975  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.060982  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:25.060988  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:25.061049  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:25.087641  404800 cri.go:89] found id: ""
	I1212 20:36:25.087663  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.087670  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:25.087675  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:25.087735  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:25.114870  404800 cri.go:89] found id: ""
	I1212 20:36:25.114885  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.114893  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:25.114899  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:25.114963  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:25.140642  404800 cri.go:89] found id: ""
	I1212 20:36:25.140664  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.140671  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:25.140677  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:25.140736  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:25.166644  404800 cri.go:89] found id: ""
	I1212 20:36:25.166658  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.166665  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:25.166671  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:25.166731  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:25.192547  404800 cri.go:89] found id: ""
	I1212 20:36:25.192561  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.192567  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:25.192572  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:25.192635  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:25.231874  404800 cri.go:89] found id: ""
	I1212 20:36:25.231889  404800 logs.go:282] 0 containers: []
	W1212 20:36:25.231895  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:25.231903  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:25.231914  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:25.315537  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:25.315559  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:25.330635  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:25.330654  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:25.395220  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:25.386939   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.387844   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389637   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389964   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.391476   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:25.386939   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.387844   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389637   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.389964   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:25.391476   11116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:25.395260  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:25.395272  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:25.467585  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:25.467605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:27.999765  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:28.012318  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:28.012406  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:28.038452  404800 cri.go:89] found id: ""
	I1212 20:36:28.038467  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.038475  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:28.038481  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:28.038550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:28.065565  404800 cri.go:89] found id: ""
	I1212 20:36:28.065579  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.065586  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:28.065591  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:28.065652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:28.091553  404800 cri.go:89] found id: ""
	I1212 20:36:28.091574  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.091581  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:28.091587  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:28.091651  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:28.117664  404800 cri.go:89] found id: ""
	I1212 20:36:28.117677  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.117684  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:28.117689  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:28.117747  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:28.143314  404800 cri.go:89] found id: ""
	I1212 20:36:28.143328  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.143335  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:28.143339  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:28.143396  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:28.170365  404800 cri.go:89] found id: ""
	I1212 20:36:28.170379  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.170386  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:28.170391  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:28.170450  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:28.194993  404800 cri.go:89] found id: ""
	I1212 20:36:28.195013  404800 logs.go:282] 0 containers: []
	W1212 20:36:28.195019  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:28.195027  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:28.195037  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:28.264144  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:28.264163  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:28.294480  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:28.294497  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:28.364064  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:28.364087  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:28.378788  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:28.378811  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:28.443238  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:28.435365   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.435947   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437460   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437963   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.439466   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:28.435365   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.435947   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437460   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.437963   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:28.439466   11239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
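Every "describe nodes" attempt in these cycles fails the same way: kubectl cannot reach the apiserver endpoint recorded in the kubeconfig (localhost:8441) and gets "connection refused", meaning nothing is listening on that port at the time of the probe. A minimal, purely illustrative Go check of that condition follows; the host and port come from the log above, everything else is an assumption added for clarity.

// Illustrative sketch only: a plain TCP dial against the apiserver port reproduces
// the same failure mode kubectl reports above when the port is not yet serving.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		// Corresponds to the "dial tcp [::1]:8441: connect: connection refused" errors in the log.
		fmt.Println("apiserver port not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on localhost:8441")
}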
	I1212 20:36:30.944182  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:30.954580  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:30.954652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:30.981452  404800 cri.go:89] found id: ""
	I1212 20:36:30.981467  404800 logs.go:282] 0 containers: []
	W1212 20:36:30.981474  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:30.981479  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:30.981543  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:31.009852  404800 cri.go:89] found id: ""
	I1212 20:36:31.009868  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.009875  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:31.009881  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:31.009949  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:31.041648  404800 cri.go:89] found id: ""
	I1212 20:36:31.041664  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.041671  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:31.041676  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:31.041741  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:31.071159  404800 cri.go:89] found id: ""
	I1212 20:36:31.071194  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.071203  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:31.071208  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:31.071274  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:31.101318  404800 cri.go:89] found id: ""
	I1212 20:36:31.101333  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.101340  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:31.101345  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:31.101407  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:31.128905  404800 cri.go:89] found id: ""
	I1212 20:36:31.128921  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.128937  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:31.128943  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:31.129019  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:31.156884  404800 cri.go:89] found id: ""
	I1212 20:36:31.156899  404800 logs.go:282] 0 containers: []
	W1212 20:36:31.156906  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:31.156914  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:31.156924  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:31.229169  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:31.229188  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:31.244638  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:31.244655  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:31.316835  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:31.307348   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.308074   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.309792   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.310466   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.311410   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:31.307348   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.308074   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.309792   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.310466   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:31.311410   11332 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:31.316848  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:31.316866  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:31.386236  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:31.386258  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:33.917579  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:33.927716  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:33.927782  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:33.952915  404800 cri.go:89] found id: ""
	I1212 20:36:33.952929  404800 logs.go:282] 0 containers: []
	W1212 20:36:33.952936  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:33.952941  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:33.952998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:33.986667  404800 cri.go:89] found id: ""
	I1212 20:36:33.986681  404800 logs.go:282] 0 containers: []
	W1212 20:36:33.986688  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:33.986693  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:33.986753  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:34.017351  404800 cri.go:89] found id: ""
	I1212 20:36:34.017367  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.017374  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:34.017379  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:34.017459  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:34.044495  404800 cri.go:89] found id: ""
	I1212 20:36:34.044509  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.044517  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:34.044522  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:34.044579  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:34.070939  404800 cri.go:89] found id: ""
	I1212 20:36:34.070953  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.070960  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:34.070964  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:34.071022  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:34.099384  404800 cri.go:89] found id: ""
	I1212 20:36:34.099398  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.099405  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:34.099411  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:34.099469  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:34.125342  404800 cri.go:89] found id: ""
	I1212 20:36:34.125357  404800 logs.go:282] 0 containers: []
	W1212 20:36:34.125364  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:34.125372  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:34.125383  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:34.195370  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:34.195391  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:34.212114  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:34.212130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:34.294767  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:34.286119   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.286818   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.288478   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.289037   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.290758   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:34.286119   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.286818   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.288478   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.289037   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:34.290758   11440 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:34.294788  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:34.294798  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:34.365333  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:34.365354  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:36.899244  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:36.909418  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:36.909481  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:36.934188  404800 cri.go:89] found id: ""
	I1212 20:36:36.934202  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.934219  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:36.934224  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:36.934281  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:36.959806  404800 cri.go:89] found id: ""
	I1212 20:36:36.959821  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.959828  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:36.959832  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:36.959898  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:36.986148  404800 cri.go:89] found id: ""
	I1212 20:36:36.986162  404800 logs.go:282] 0 containers: []
	W1212 20:36:36.986169  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:36.986174  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:36.986231  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:37.017876  404800 cri.go:89] found id: ""
	I1212 20:36:37.017892  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.017899  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:37.017905  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:37.017971  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:37.047901  404800 cri.go:89] found id: ""
	I1212 20:36:37.047915  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.047921  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:37.047926  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:37.047985  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:37.076531  404800 cri.go:89] found id: ""
	I1212 20:36:37.076546  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.076553  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:37.076558  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:37.076615  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:37.102846  404800 cri.go:89] found id: ""
	I1212 20:36:37.102870  404800 logs.go:282] 0 containers: []
	W1212 20:36:37.102877  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:37.102885  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:37.102896  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:37.134007  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:37.134024  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:37.207327  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:37.207352  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:37.222638  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:37.222657  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:37.290385  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:37.281958   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.282679   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.283817   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.284511   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.286319   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:37.281958   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.282679   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.283817   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.284511   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:37.286319   11558 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:37.290395  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:37.290406  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:39.860964  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:39.871500  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:39.871558  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:39.898740  404800 cri.go:89] found id: ""
	I1212 20:36:39.898755  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.898762  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:39.898767  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:39.898830  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:39.925154  404800 cri.go:89] found id: ""
	I1212 20:36:39.925168  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.925175  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:39.925180  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:39.925239  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:39.950208  404800 cri.go:89] found id: ""
	I1212 20:36:39.950223  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.950229  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:39.950234  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:39.950297  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:39.976836  404800 cri.go:89] found id: ""
	I1212 20:36:39.976851  404800 logs.go:282] 0 containers: []
	W1212 20:36:39.976857  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:39.976863  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:39.976936  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:40.009665  404800 cri.go:89] found id: ""
	I1212 20:36:40.009695  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.010153  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:40.010168  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:40.010262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:40.067797  404800 cri.go:89] found id: ""
	I1212 20:36:40.067813  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.067838  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:40.067844  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:40.067922  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:40.103262  404800 cri.go:89] found id: ""
	I1212 20:36:40.103277  404800 logs.go:282] 0 containers: []
	W1212 20:36:40.103287  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:40.103295  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:40.103308  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:40.119554  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:40.119573  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:40.195337  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:40.185349   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.186460   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188199   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188873   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.190824   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:40.185349   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.186460   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188199   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.188873   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:40.190824   11645 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:40.195364  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:40.195376  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:40.270010  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:40.270029  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:40.299631  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:40.299652  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:42.866117  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:42.876408  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:42.876467  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:42.901308  404800 cri.go:89] found id: ""
	I1212 20:36:42.901321  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.901328  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:42.901333  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:42.901396  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:42.925954  404800 cri.go:89] found id: ""
	I1212 20:36:42.925968  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.925975  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:42.925980  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:42.926041  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:42.951209  404800 cri.go:89] found id: ""
	I1212 20:36:42.951224  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.951231  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:42.951236  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:42.951296  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:42.977995  404800 cri.go:89] found id: ""
	I1212 20:36:42.978010  404800 logs.go:282] 0 containers: []
	W1212 20:36:42.978017  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:42.978022  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:42.978082  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:43.004860  404800 cri.go:89] found id: ""
	I1212 20:36:43.004875  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.004892  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:43.004898  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:43.004973  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:43.040400  404800 cri.go:89] found id: ""
	I1212 20:36:43.040414  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.040421  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:43.040427  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:43.040485  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:43.068090  404800 cri.go:89] found id: ""
	I1212 20:36:43.068104  404800 logs.go:282] 0 containers: []
	W1212 20:36:43.068122  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:43.068130  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:43.068144  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:43.140175  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:43.140195  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:43.154957  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:43.154976  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:43.225443  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:43.216555   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.217274   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.218829   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.219142   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.220753   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:43.216555   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.217274   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.218829   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.219142   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:43.220753   11754 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:43.225462  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:43.225473  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:43.307152  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:43.307175  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:45.837432  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:45.847721  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:45.847783  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:45.874064  404800 cri.go:89] found id: ""
	I1212 20:36:45.874118  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.874125  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:45.874131  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:45.874197  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:45.902655  404800 cri.go:89] found id: ""
	I1212 20:36:45.902669  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.902676  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:45.902681  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:45.902739  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:45.929017  404800 cri.go:89] found id: ""
	I1212 20:36:45.929031  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.929044  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:45.929050  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:45.929118  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:45.958749  404800 cri.go:89] found id: ""
	I1212 20:36:45.958763  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.958770  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:45.958776  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:45.958837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:45.989217  404800 cri.go:89] found id: ""
	I1212 20:36:45.989239  404800 logs.go:282] 0 containers: []
	W1212 20:36:45.989246  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:45.989252  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:45.989317  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:46.017594  404800 cri.go:89] found id: ""
	I1212 20:36:46.017609  404800 logs.go:282] 0 containers: []
	W1212 20:36:46.017616  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:46.017621  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:46.017681  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:46.047594  404800 cri.go:89] found id: ""
	I1212 20:36:46.047619  404800 logs.go:282] 0 containers: []
	W1212 20:36:46.047628  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:46.047636  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:46.047647  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:46.113115  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:46.113137  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:46.128309  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:46.128328  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:46.195035  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:46.186544   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.187172   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.188933   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.189538   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.191089   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:46.186544   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.187172   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.188933   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.189538   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:46.191089   11857 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:46.195044  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:46.195054  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:46.268896  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:46.268917  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:48.800382  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:48.810496  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:48.810556  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:48.835685  404800 cri.go:89] found id: ""
	I1212 20:36:48.835699  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.835706  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:48.835712  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:48.835772  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:48.864872  404800 cri.go:89] found id: ""
	I1212 20:36:48.864892  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.864899  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:48.864904  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:48.864969  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:48.889491  404800 cri.go:89] found id: ""
	I1212 20:36:48.889505  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.889512  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:48.889517  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:48.889577  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:48.914454  404800 cri.go:89] found id: ""
	I1212 20:36:48.914468  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.914474  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:48.914480  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:48.914533  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:48.938478  404800 cri.go:89] found id: ""
	I1212 20:36:48.938492  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.938499  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:48.938504  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:48.938570  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:48.964129  404800 cri.go:89] found id: ""
	I1212 20:36:48.964143  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.964151  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:48.964156  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:48.964221  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:48.989666  404800 cri.go:89] found id: ""
	I1212 20:36:48.989680  404800 logs.go:282] 0 containers: []
	W1212 20:36:48.989687  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:48.989695  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:48.989705  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:49.063089  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:49.063110  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:49.095579  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:49.095596  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:49.163720  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:49.163740  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:49.178328  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:49.178344  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:49.260325  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:49.251791   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.252708   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.253936   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.254698   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.256413   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:49.251791   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.252708   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.253936   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.254698   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:49.256413   11973 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:51.761045  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:51.771641  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:51.771702  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:51.797458  404800 cri.go:89] found id: ""
	I1212 20:36:51.797472  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.797479  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:51.797484  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:51.797541  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:51.823244  404800 cri.go:89] found id: ""
	I1212 20:36:51.823268  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.823274  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:51.823279  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:51.823346  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:51.848495  404800 cri.go:89] found id: ""
	I1212 20:36:51.848509  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.848516  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:51.848520  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:51.848580  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:51.873152  404800 cri.go:89] found id: ""
	I1212 20:36:51.873168  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.873175  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:51.873180  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:51.873238  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:51.898283  404800 cri.go:89] found id: ""
	I1212 20:36:51.898297  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.898305  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:51.898310  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:51.898370  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:51.924343  404800 cri.go:89] found id: ""
	I1212 20:36:51.924358  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.924386  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:51.924392  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:51.924455  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:51.949330  404800 cri.go:89] found id: ""
	I1212 20:36:51.949345  404800 logs.go:282] 0 containers: []
	W1212 20:36:51.949352  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:51.949359  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:51.949371  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:52.016304  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:52.016326  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:52.032963  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:52.032980  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:52.109987  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:52.099831   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.100720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.101466   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.103451   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.104261   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:52.099831   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.100720   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.101466   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.103451   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:52.104261   12064 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:52.109999  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:52.110012  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:52.180144  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:52.180164  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:54.720069  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:54.730740  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:54.730803  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:54.758017  404800 cri.go:89] found id: ""
	I1212 20:36:54.758032  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.758038  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:54.758044  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:54.758105  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:54.790190  404800 cri.go:89] found id: ""
	I1212 20:36:54.790210  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.790217  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:54.790222  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:54.790281  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:54.819974  404800 cri.go:89] found id: ""
	I1212 20:36:54.819989  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.819996  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:54.820001  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:54.820065  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:54.847251  404800 cri.go:89] found id: ""
	I1212 20:36:54.847265  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.847272  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:54.847277  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:54.847342  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:54.873168  404800 cri.go:89] found id: ""
	I1212 20:36:54.873182  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.873190  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:54.873195  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:54.873262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:54.898145  404800 cri.go:89] found id: ""
	I1212 20:36:54.898160  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.898167  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:54.898175  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:54.898237  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:54.924123  404800 cri.go:89] found id: ""
	I1212 20:36:54.924146  404800 logs.go:282] 0 containers: []
	W1212 20:36:54.924155  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:54.924163  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:54.924173  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:54.989756  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:54.989775  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:55.021117  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:55.021137  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:55.090802  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:55.082767   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.083409   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.084984   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.085445   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.086924   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:55.082767   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.083409   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.084984   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.085445   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:55.086924   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:55.090816  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:55.090828  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:55.164266  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:55.164287  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:36:57.696458  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:36:57.706599  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:36:57.706656  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:36:57.732396  404800 cri.go:89] found id: ""
	I1212 20:36:57.732410  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.732420  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:36:57.732425  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:36:57.732485  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:36:57.758017  404800 cri.go:89] found id: ""
	I1212 20:36:57.758032  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.758039  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:36:57.758044  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:36:57.758100  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:36:57.784957  404800 cri.go:89] found id: ""
	I1212 20:36:57.784971  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.784978  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:36:57.784983  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:36:57.785044  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:36:57.810973  404800 cri.go:89] found id: ""
	I1212 20:36:57.810986  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.810993  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:36:57.810999  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:36:57.811054  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:36:57.837384  404800 cri.go:89] found id: ""
	I1212 20:36:57.837398  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.837406  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:36:57.837411  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:36:57.837487  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:36:57.863576  404800 cri.go:89] found id: ""
	I1212 20:36:57.863598  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.863605  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:36:57.863610  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:36:57.863676  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:36:57.889215  404800 cri.go:89] found id: ""
	I1212 20:36:57.889236  404800 logs.go:282] 0 containers: []
	W1212 20:36:57.889244  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:36:57.889252  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:36:57.889263  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:36:57.956054  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:36:57.956076  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:36:57.970574  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:36:57.970590  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:36:58.038134  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:36:58.029330   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.029739   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.031379   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.032214   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.033970   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:36:58.029330   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.029739   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.031379   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.032214   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:36:58.033970   12276 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:36:58.038144  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:36:58.038160  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:36:58.109516  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:36:58.109541  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:00.640789  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:00.651136  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:00.651196  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:00.678187  404800 cri.go:89] found id: ""
	I1212 20:37:00.678202  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.678209  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:00.678215  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:00.678275  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:00.703384  404800 cri.go:89] found id: ""
	I1212 20:37:00.703400  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.703407  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:00.703412  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:00.703474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:00.735999  404800 cri.go:89] found id: ""
	I1212 20:37:00.736013  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.736020  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:00.736025  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:00.736083  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:00.762232  404800 cri.go:89] found id: ""
	I1212 20:37:00.762246  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.762253  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:00.762258  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:00.762314  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:00.788575  404800 cri.go:89] found id: ""
	I1212 20:37:00.788589  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.788596  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:00.788601  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:00.788663  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:00.815050  404800 cri.go:89] found id: ""
	I1212 20:37:00.815065  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.815081  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:00.815087  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:00.815146  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:00.840166  404800 cri.go:89] found id: ""
	I1212 20:37:00.840180  404800 logs.go:282] 0 containers: []
	W1212 20:37:00.840196  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:00.840205  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:00.840216  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:00.905766  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:00.905787  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:00.920612  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:00.920631  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:00.987903  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:00.979886   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.980290   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.981934   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.982374   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.983860   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:00.979886   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.980290   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.981934   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.982374   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:00.983860   12381 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:00.987914  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:00.987926  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:01.058125  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:01.058146  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:03.588584  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:03.599133  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:03.599202  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:03.629322  404800 cri.go:89] found id: ""
	I1212 20:37:03.629336  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.629343  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:03.629348  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:03.629410  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:03.654415  404800 cri.go:89] found id: ""
	I1212 20:37:03.654429  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.654436  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:03.654443  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:03.654499  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:03.679922  404800 cri.go:89] found id: ""
	I1212 20:37:03.679937  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.679944  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:03.679950  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:03.680015  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:03.706619  404800 cri.go:89] found id: ""
	I1212 20:37:03.706634  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.706640  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:03.706646  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:03.706707  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:03.733101  404800 cri.go:89] found id: ""
	I1212 20:37:03.733116  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.733123  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:03.733128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:03.733189  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:03.758431  404800 cri.go:89] found id: ""
	I1212 20:37:03.758445  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.758452  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:03.758457  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:03.758520  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:03.789138  404800 cri.go:89] found id: ""
	I1212 20:37:03.789152  404800 logs.go:282] 0 containers: []
	W1212 20:37:03.789159  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:03.789166  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:03.789177  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:03.852394  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:03.843826   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.844548   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846260   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846901   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.848580   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:03.843826   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.844548   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846260   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.846901   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:03.848580   12481 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:03.852404  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:03.852415  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:03.921263  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:03.921283  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:03.950006  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:03.950022  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:04.020715  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:04.020739  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:06.536553  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:06.547113  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:06.547176  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:06.575862  404800 cri.go:89] found id: ""
	I1212 20:37:06.575876  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.575883  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:06.575888  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:06.575947  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:06.601781  404800 cri.go:89] found id: ""
	I1212 20:37:06.601796  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.601803  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:06.601808  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:06.601868  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:06.627486  404800 cri.go:89] found id: ""
	I1212 20:37:06.627500  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.627507  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:06.627520  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:06.627577  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:06.656432  404800 cri.go:89] found id: ""
	I1212 20:37:06.656446  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.656454  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:06.656465  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:06.656526  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:06.681705  404800 cri.go:89] found id: ""
	I1212 20:37:06.681719  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.681726  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:06.681731  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:06.681794  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:06.707068  404800 cri.go:89] found id: ""
	I1212 20:37:06.707083  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.707090  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:06.707095  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:06.707157  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:06.734286  404800 cri.go:89] found id: ""
	I1212 20:37:06.734300  404800 logs.go:282] 0 containers: []
	W1212 20:37:06.734307  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:06.734314  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:06.734324  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:06.799595  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:06.799616  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:06.814521  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:06.814543  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:06.881453  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:06.872121   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.872841   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.874695   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.875330   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.876927   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:06.872121   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.872841   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.874695   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.875330   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:06.876927   12594 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:06.881463  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:06.881474  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:06.950345  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:06.950365  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:09.488970  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:09.500875  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:09.500940  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:09.529418  404800 cri.go:89] found id: ""
	I1212 20:37:09.529433  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.529439  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:09.529445  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:09.529505  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:09.559685  404800 cri.go:89] found id: ""
	I1212 20:37:09.559700  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.559707  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:09.559712  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:09.559772  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:09.587781  404800 cri.go:89] found id: ""
	I1212 20:37:09.587796  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.587802  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:09.587807  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:09.587869  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:09.613804  404800 cri.go:89] found id: ""
	I1212 20:37:09.613820  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.613826  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:09.613832  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:09.613903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:09.639550  404800 cri.go:89] found id: ""
	I1212 20:37:09.639566  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.639573  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:09.639578  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:09.639644  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:09.669938  404800 cri.go:89] found id: ""
	I1212 20:37:09.669953  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.669960  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:09.669965  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:09.670025  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:09.696771  404800 cri.go:89] found id: ""
	I1212 20:37:09.696785  404800 logs.go:282] 0 containers: []
	W1212 20:37:09.696799  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:09.696807  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:09.696818  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:09.763319  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:09.763340  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:09.778782  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:09.778799  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:09.846376  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:09.837510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.838340   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.839144   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.840746   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.841106   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:09.837510   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.838340   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.839144   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.840746   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:09.841106   12701 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:09.846385  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:09.846396  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:09.917476  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:09.917497  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:12.447817  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:12.457978  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:12.458042  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:12.491473  404800 cri.go:89] found id: ""
	I1212 20:37:12.491487  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.491495  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:12.491500  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:12.491559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:12.522865  404800 cri.go:89] found id: ""
	I1212 20:37:12.522881  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.522888  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:12.522892  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:12.522959  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:12.548498  404800 cri.go:89] found id: ""
	I1212 20:37:12.548514  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.548521  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:12.548526  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:12.548592  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:12.579700  404800 cri.go:89] found id: ""
	I1212 20:37:12.579714  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.579721  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:12.579726  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:12.579791  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:12.606849  404800 cri.go:89] found id: ""
	I1212 20:37:12.606863  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.606870  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:12.606878  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:12.606942  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:12.632352  404800 cri.go:89] found id: ""
	I1212 20:37:12.632386  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.632394  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:12.632400  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:12.632464  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:12.657776  404800 cri.go:89] found id: ""
	I1212 20:37:12.657791  404800 logs.go:282] 0 containers: []
	W1212 20:37:12.657798  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:12.657805  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:12.657816  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:12.672067  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:12.672083  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:12.744080  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:12.736614   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.737064   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738565   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738904   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.740331   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:12.736614   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.737064   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738565   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.738904   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:12.740331   12802 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:12.744093  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:12.744103  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:12.811395  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:12.811414  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:12.839843  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:12.839862  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:15.405601  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:15.417051  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:15.417110  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:15.442503  404800 cri.go:89] found id: ""
	I1212 20:37:15.442517  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.442524  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:15.442530  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:15.442588  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:15.483736  404800 cri.go:89] found id: ""
	I1212 20:37:15.483763  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.483770  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:15.483775  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:15.483843  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:15.515671  404800 cri.go:89] found id: ""
	I1212 20:37:15.515685  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.515692  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:15.515697  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:15.515764  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:15.548136  404800 cri.go:89] found id: ""
	I1212 20:37:15.548151  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.548158  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:15.548163  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:15.548221  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:15.576936  404800 cri.go:89] found id: ""
	I1212 20:37:15.576951  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.576958  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:15.576962  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:15.577022  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:15.603608  404800 cri.go:89] found id: ""
	I1212 20:37:15.603622  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.603629  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:15.603634  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:15.603689  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:15.638105  404800 cri.go:89] found id: ""
	I1212 20:37:15.638125  404800 logs.go:282] 0 containers: []
	W1212 20:37:15.638133  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:15.638140  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:15.638150  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:15.708493  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:15.708513  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:15.723827  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:15.723851  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:15.792302  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:15.784344   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.784799   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786487   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786941   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.788392   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:15.784344   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.784799   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786487   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.786941   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:15.788392   12911 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:15.792314  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:15.792326  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:15.860772  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:15.860796  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:18.397462  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:18.407317  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:18.407382  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:18.433353  404800 cri.go:89] found id: ""
	I1212 20:37:18.433368  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.433375  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:18.433379  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:18.433435  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:18.465547  404800 cri.go:89] found id: ""
	I1212 20:37:18.465561  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.465568  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:18.465572  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:18.465629  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:18.498811  404800 cri.go:89] found id: ""
	I1212 20:37:18.498825  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.498832  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:18.498837  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:18.498894  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:18.525729  404800 cri.go:89] found id: ""
	I1212 20:37:18.525745  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.525752  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:18.525758  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:18.525820  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:18.555807  404800 cri.go:89] found id: ""
	I1212 20:37:18.555822  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.555829  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:18.555834  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:18.555890  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:18.586968  404800 cri.go:89] found id: ""
	I1212 20:37:18.586982  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.586989  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:18.586994  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:18.587048  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:18.613654  404800 cri.go:89] found id: ""
	I1212 20:37:18.613668  404800 logs.go:282] 0 containers: []
	W1212 20:37:18.613675  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:18.613683  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:18.613694  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:18.685435  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:18.685464  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:18.701543  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:18.701560  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:18.771148  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:18.762368   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.763025   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765038   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765857   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.767427   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:18.762368   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.763025   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765038   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.765857   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:18.767427   13013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:18.771159  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:18.771169  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:18.840302  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:18.840324  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:21.370649  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:21.380730  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:21.380785  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:21.407262  404800 cri.go:89] found id: ""
	I1212 20:37:21.407277  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.407285  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:21.407290  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:21.407353  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:21.431725  404800 cri.go:89] found id: ""
	I1212 20:37:21.431741  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.431748  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:21.431753  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:21.431808  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:21.462830  404800 cri.go:89] found id: ""
	I1212 20:37:21.462844  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.462851  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:21.462856  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:21.462914  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:21.490038  404800 cri.go:89] found id: ""
	I1212 20:37:21.490053  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.490060  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:21.490066  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:21.490123  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:21.522135  404800 cri.go:89] found id: ""
	I1212 20:37:21.522152  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.522165  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:21.522170  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:21.522243  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:21.550272  404800 cri.go:89] found id: ""
	I1212 20:37:21.550286  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.550293  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:21.550298  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:21.550352  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:21.575855  404800 cri.go:89] found id: ""
	I1212 20:37:21.575868  404800 logs.go:282] 0 containers: []
	W1212 20:37:21.575875  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:21.575882  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:21.575892  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:21.643213  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:21.643234  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:21.676057  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:21.676076  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:21.746870  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:21.746890  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:21.762368  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:21.762383  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:21.829472  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:21.821498   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.822053   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.823553   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.824031   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.825114   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:21.821498   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.822053   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.823553   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.824031   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:21.825114   13130 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
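The cycle above is minikube's log-collection loop: it probes for each expected control-plane container with crictl, finds none, and then falls back to journalctl, dmesg, and kubectl describe nodes. A minimal sketch of reproducing those probes by hand inside the node, assuming the profile is still running and reachable with `minikube ssh` (the profile name is not shown in this excerpt); the individual commands are the same ones the log records:

    # run inside the node, e.g. via: minikube ssh
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"        # empty output means no container, matching the cycles above
    done
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'   # same process check minikube runs before each cycle
    sudo journalctl -u kubelet -n 400              # kubelet logs, gathered the same way as above
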
	I1212 20:37:24.331150  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:24.341451  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:24.341509  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:24.365339  404800 cri.go:89] found id: ""
	I1212 20:37:24.365354  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.365362  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:24.365367  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:24.365430  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:24.392822  404800 cri.go:89] found id: ""
	I1212 20:37:24.392837  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.392844  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:24.392849  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:24.392941  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:24.419333  404800 cri.go:89] found id: ""
	I1212 20:37:24.419347  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.419354  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:24.419365  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:24.419422  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:24.444927  404800 cri.go:89] found id: ""
	I1212 20:37:24.444940  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.444947  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:24.444952  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:24.445014  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:24.479382  404800 cri.go:89] found id: ""
	I1212 20:37:24.479411  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.479422  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:24.479427  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:24.479496  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:24.519373  404800 cri.go:89] found id: ""
	I1212 20:37:24.519387  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.519394  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:24.519399  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:24.519458  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:24.546714  404800 cri.go:89] found id: ""
	I1212 20:37:24.546729  404800 logs.go:282] 0 containers: []
	W1212 20:37:24.546736  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:24.546744  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:24.546755  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:24.612546  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:24.612568  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:24.627419  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:24.627435  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:24.695735  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:24.686719   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.687385   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689276   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689753   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.691296   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:24.686719   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.687385   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689276   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.689753   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:24.691296   13222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:24.695745  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:24.695757  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:24.764903  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:24.764929  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:27.295998  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:27.306158  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:27.306222  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:27.331510  404800 cri.go:89] found id: ""
	I1212 20:37:27.331524  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.331532  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:27.331549  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:27.331608  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:27.357120  404800 cri.go:89] found id: ""
	I1212 20:37:27.357134  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.357141  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:27.357146  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:27.357227  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:27.383390  404800 cri.go:89] found id: ""
	I1212 20:37:27.383404  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.383411  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:27.383416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:27.383471  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:27.408672  404800 cri.go:89] found id: ""
	I1212 20:37:27.408687  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.408695  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:27.408699  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:27.408758  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:27.434453  404800 cri.go:89] found id: ""
	I1212 20:37:27.434467  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.434478  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:27.434483  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:27.434542  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:27.467590  404800 cri.go:89] found id: ""
	I1212 20:37:27.467603  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.467610  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:27.467615  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:27.467672  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:27.501872  404800 cri.go:89] found id: ""
	I1212 20:37:27.501886  404800 logs.go:282] 0 containers: []
	W1212 20:37:27.501893  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:27.501900  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:27.501912  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:27.574950  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:27.574971  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:27.590147  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:27.590163  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:27.659572  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:27.651234   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.652048   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.653725   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.654359   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.655385   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:27.651234   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.652048   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.653725   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.654359   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:27.655385   13326 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:27.659583  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:27.659594  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:27.728089  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:27.728111  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:30.260552  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:30.272906  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:30.272984  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:30.302879  404800 cri.go:89] found id: ""
	I1212 20:37:30.302903  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.302911  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:30.302916  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:30.302993  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:30.332792  404800 cri.go:89] found id: ""
	I1212 20:37:30.332807  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.332814  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:30.332819  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:30.332877  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:30.359283  404800 cri.go:89] found id: ""
	I1212 20:37:30.359298  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.359306  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:30.359311  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:30.359369  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:30.385609  404800 cri.go:89] found id: ""
	I1212 20:37:30.385624  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.385643  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:30.385649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:30.385709  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:30.410328  404800 cri.go:89] found id: ""
	I1212 20:37:30.410343  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.410358  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:30.410362  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:30.410423  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:30.435005  404800 cri.go:89] found id: ""
	I1212 20:37:30.435019  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.435026  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:30.435031  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:30.435089  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:30.474088  404800 cri.go:89] found id: ""
	I1212 20:37:30.474102  404800 logs.go:282] 0 containers: []
	W1212 20:37:30.474109  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:30.474116  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:30.474127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:30.508894  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:30.508918  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:30.583876  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:30.583895  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:30.599205  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:30.599229  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:30.667713  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:30.659122   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.659662   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661283   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661849   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.663383   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:30.659122   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.659662   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661283   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.661849   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:30.663383   13444 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:30.667723  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:30.667749  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:33.236428  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:33.246549  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:33.246607  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:33.272236  404800 cri.go:89] found id: ""
	I1212 20:37:33.272250  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.272257  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:33.272262  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:33.272324  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:33.297982  404800 cri.go:89] found id: ""
	I1212 20:37:33.297997  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.298004  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:33.298009  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:33.298068  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:33.324170  404800 cri.go:89] found id: ""
	I1212 20:37:33.324183  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.324190  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:33.324195  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:33.324252  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:33.350869  404800 cri.go:89] found id: ""
	I1212 20:37:33.350883  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.350890  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:33.350895  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:33.350950  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:33.376336  404800 cri.go:89] found id: ""
	I1212 20:37:33.376352  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.376360  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:33.376384  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:33.376446  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:33.402358  404800 cri.go:89] found id: ""
	I1212 20:37:33.402371  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.402378  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:33.402384  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:33.402444  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:33.428067  404800 cri.go:89] found id: ""
	I1212 20:37:33.428081  404800 logs.go:282] 0 containers: []
	W1212 20:37:33.428088  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:33.428104  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:33.428114  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:33.498721  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:33.498744  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:33.532343  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:33.532362  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:33.601583  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:33.601603  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:33.616929  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:33.616947  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:33.680299  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:33.671666   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.672531   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674007   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674498   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.676176   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:33.671666   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.672531   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674007   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.674498   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:33.676176   13551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
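Every `kubectl describe nodes` attempt in these cycles fails the same way: nothing is listening on localhost:8441, so the client gets connection refused before any API call can be made. A minimal sketch for confirming that from the node, assuming `ss` and `curl` are available in the node image (neither command appears in this log):

    sudo ss -tlnp | grep 8441 || echo "nothing listening on 8441"
    curl -ksS https://localhost:8441/livez || true    # expected to fail with 'connection refused' while the apiserver is down
    sudo crictl ps -a --quiet --name=kube-apiserver   # empty, consistent with the probes above
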
	I1212 20:37:36.180540  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:36.191300  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:36.191360  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:36.219483  404800 cri.go:89] found id: ""
	I1212 20:37:36.219498  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.219505  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:36.219511  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:36.219569  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:36.246240  404800 cri.go:89] found id: ""
	I1212 20:37:36.246255  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.246262  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:36.246267  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:36.246326  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:36.272949  404800 cri.go:89] found id: ""
	I1212 20:37:36.272962  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.272969  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:36.272975  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:36.273038  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:36.298716  404800 cri.go:89] found id: ""
	I1212 20:37:36.298731  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.298738  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:36.298743  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:36.298798  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:36.325228  404800 cri.go:89] found id: ""
	I1212 20:37:36.325242  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.325249  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:36.325254  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:36.325312  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:36.350322  404800 cri.go:89] found id: ""
	I1212 20:37:36.350337  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.350344  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:36.350350  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:36.350406  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:36.380083  404800 cri.go:89] found id: ""
	I1212 20:37:36.380097  404800 logs.go:282] 0 containers: []
	W1212 20:37:36.380104  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:36.380117  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:36.380128  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:36.442887  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:36.434327   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.435078   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.436885   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.437411   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.438936   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:36.434327   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.435078   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.436885   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.437411   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:36.438936   13634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:36.442899  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:36.442910  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:36.514571  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:36.514592  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:36.549020  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:36.549036  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:36.615002  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:36.615023  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
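
The block above is one iteration of a health-check loop: the timestamps show the same probe (pgrep for kube-apiserver, crictl lookups for each control-plane component, then log gathering) repeating roughly every three seconds. A rough Go sketch of a poll-until-ready loop of that shape, assuming the ~3s interval read off the timestamps; illustrative only, not minikube's implementation:

// pollready.go - hypothetical poll-until-apiserver-ready loop suggested by the
// repeating probes in this log; not minikube's actual code.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// Same probe the log records: is a kube-apiserver process for this profile up?
	err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
	return err == nil // pgrep exits non-zero when nothing matches
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	ticker := time.NewTicker(3 * time.Second) // interval inferred from the log timestamps
	defer ticker.Stop()

	for {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("gave up waiting for kube-apiserver")
			return
		case <-ticker.C:
		}
	}
}
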
	I1212 20:37:39.129960  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:39.139842  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:39.139903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:39.164988  404800 cri.go:89] found id: ""
	I1212 20:37:39.165003  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.165010  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:39.165014  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:39.165072  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:39.195151  404800 cri.go:89] found id: ""
	I1212 20:37:39.195166  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.195172  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:39.195177  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:39.195235  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:39.223301  404800 cri.go:89] found id: ""
	I1212 20:37:39.223315  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.223322  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:39.223327  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:39.223384  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:39.248078  404800 cri.go:89] found id: ""
	I1212 20:37:39.248093  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.248100  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:39.248105  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:39.248162  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:39.272363  404800 cri.go:89] found id: ""
	I1212 20:37:39.272403  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.272411  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:39.272415  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:39.272474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:39.297353  404800 cri.go:89] found id: ""
	I1212 20:37:39.297367  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.297374  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:39.297379  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:39.297437  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:39.322842  404800 cri.go:89] found id: ""
	I1212 20:37:39.322855  404800 logs.go:282] 0 containers: []
	W1212 20:37:39.322863  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:39.322870  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:39.322881  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:39.337445  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:39.337460  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:39.398684  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:39.390797   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.391338   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.392503   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.393095   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.394860   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:39.390797   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.391338   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.392503   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.393095   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:39.394860   13742 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:39.398694  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:39.398704  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:39.472608  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:39.472628  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:39.511488  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:39.517700  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:42.092404  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:42.104757  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:42.104826  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:42.137172  404800 cri.go:89] found id: ""
	I1212 20:37:42.137189  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.137198  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:42.137204  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:42.137277  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:42.168320  404800 cri.go:89] found id: ""
	I1212 20:37:42.168336  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.168344  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:42.168349  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:42.168455  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:42.202618  404800 cri.go:89] found id: ""
	I1212 20:37:42.202633  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.202641  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:42.202647  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:42.202714  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:42.232011  404800 cri.go:89] found id: ""
	I1212 20:37:42.232026  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.232034  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:42.232039  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:42.232101  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:42.260345  404800 cri.go:89] found id: ""
	I1212 20:37:42.260360  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.260398  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:42.260403  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:42.260465  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:42.286857  404800 cri.go:89] found id: ""
	I1212 20:37:42.286882  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.286890  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:42.286898  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:42.286968  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:42.314846  404800 cri.go:89] found id: ""
	I1212 20:37:42.314870  404800 logs.go:282] 0 containers: []
	W1212 20:37:42.314877  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:42.314885  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:42.314898  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:42.382203  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:42.382223  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:42.397537  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:42.397554  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:42.463930  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:42.455367   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.456320   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458022   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458334   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.459806   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:42.455367   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.456320   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458022   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.458334   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:42.459806   13852 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:42.463940  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:42.463951  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:42.539788  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:42.539809  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
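
Each "listing CRI containers" line corresponds to a crictl invocation with a --name filter; --quiet prints bare container IDs one per line, so empty output is exactly what produces the 'No container was found matching ...' warnings. A small hypothetical helper sketch (not minikube's cri.go) that mirrors that lookup:

// crilist.go - hypothetical sketch mirroring the container lookup recorded above:
// run crictl with a name filter and treat empty output as "no container found".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func listContainers(name string) ([]string, error) {
	// Same command the log records for each control-plane component.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // --quiet prints bare IDs, one per line
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c) // what the W-lines report
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
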
	I1212 20:37:45.073125  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:45.091416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:45.091491  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:45.126675  404800 cri.go:89] found id: ""
	I1212 20:37:45.126699  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.126707  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:45.126714  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:45.126789  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:45.167457  404800 cri.go:89] found id: ""
	I1212 20:37:45.167475  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.167483  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:45.167489  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:45.167559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:45.226232  404800 cri.go:89] found id: ""
	I1212 20:37:45.226264  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.226292  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:45.226299  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:45.226372  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:45.273410  404800 cri.go:89] found id: ""
	I1212 20:37:45.273427  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.273435  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:45.273441  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:45.273513  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:45.313155  404800 cri.go:89] found id: ""
	I1212 20:37:45.313171  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.313178  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:45.313183  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:45.313253  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:45.345614  404800 cri.go:89] found id: ""
	I1212 20:37:45.345640  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.345669  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:45.345688  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:45.345851  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:45.375592  404800 cri.go:89] found id: ""
	I1212 20:37:45.375606  404800 logs.go:282] 0 containers: []
	W1212 20:37:45.375614  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:45.375622  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:45.375633  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:45.446441  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:45.446461  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:45.463226  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:45.463243  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:45.540934  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:45.533118   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.533590   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535134   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535468   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.536952   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:45.533118   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.533590   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535134   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.535468   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:45.536952   13961 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:45.540944  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:45.540955  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:45.610027  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:45.610051  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:48.142953  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:48.153422  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:48.153489  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:48.182170  404800 cri.go:89] found id: ""
	I1212 20:37:48.182185  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.182192  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:48.182197  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:48.182255  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:48.207474  404800 cri.go:89] found id: ""
	I1212 20:37:48.207498  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.207506  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:48.207511  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:48.207588  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:48.232357  404800 cri.go:89] found id: ""
	I1212 20:37:48.232391  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.232399  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:48.232404  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:48.232472  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:48.257989  404800 cri.go:89] found id: ""
	I1212 20:37:48.258016  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.258024  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:48.258029  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:48.258095  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:48.282918  404800 cri.go:89] found id: ""
	I1212 20:37:48.282932  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.282940  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:48.282945  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:48.283008  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:48.309285  404800 cri.go:89] found id: ""
	I1212 20:37:48.309299  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.309306  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:48.309311  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:48.309367  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:48.335545  404800 cri.go:89] found id: ""
	I1212 20:37:48.335559  404800 logs.go:282] 0 containers: []
	W1212 20:37:48.335566  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:48.335573  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:48.335586  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:48.401770  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:48.401789  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:48.416320  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:48.416336  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:48.501926  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:48.486330   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.487051   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.492679   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.493283   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.495892   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:48.486330   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.487051   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.492679   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.493283   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:48.495892   14063 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:48.501944  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:48.501955  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:48.576534  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:48.576555  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:51.105155  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:51.115964  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:51.116028  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:51.145401  404800 cri.go:89] found id: ""
	I1212 20:37:51.145416  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.145433  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:51.145445  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:51.145517  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:51.172664  404800 cri.go:89] found id: ""
	I1212 20:37:51.172679  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.172685  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:51.172690  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:51.172753  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:51.198093  404800 cri.go:89] found id: ""
	I1212 20:37:51.198108  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.198115  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:51.198120  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:51.198179  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:51.223420  404800 cri.go:89] found id: ""
	I1212 20:37:51.223433  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.223449  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:51.223454  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:51.223510  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:51.253134  404800 cri.go:89] found id: ""
	I1212 20:37:51.253157  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.253164  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:51.253170  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:51.253236  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:51.278738  404800 cri.go:89] found id: ""
	I1212 20:37:51.278753  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.278761  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:51.278766  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:51.278821  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:51.304296  404800 cri.go:89] found id: ""
	I1212 20:37:51.304311  404800 logs.go:282] 0 containers: []
	W1212 20:37:51.304318  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:51.304325  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:51.304346  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:51.370289  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:51.370308  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:51.385101  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:51.385116  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:51.449107  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:51.441267   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.441910   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443391   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443793   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.445251   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:51.441267   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.441910   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443391   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.443793   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:51.445251   14166 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:51.449117  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:51.449127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:51.519024  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:51.519047  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:54.054216  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:54.064710  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:54.064769  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:54.091620  404800 cri.go:89] found id: ""
	I1212 20:37:54.091634  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.091641  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:54.091646  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:54.091701  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:54.122000  404800 cri.go:89] found id: ""
	I1212 20:37:54.122013  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.122020  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:54.122025  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:54.122081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:54.151439  404800 cri.go:89] found id: ""
	I1212 20:37:54.151454  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.151461  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:54.151466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:54.151520  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:54.180154  404800 cri.go:89] found id: ""
	I1212 20:37:54.180168  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.180175  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:54.180180  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:54.180235  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:54.206927  404800 cri.go:89] found id: ""
	I1212 20:37:54.206947  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.206954  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:54.206959  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:54.207014  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:54.231274  404800 cri.go:89] found id: ""
	I1212 20:37:54.231288  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.231306  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:54.231312  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:54.231366  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:54.259379  404800 cri.go:89] found id: ""
	I1212 20:37:54.259395  404800 logs.go:282] 0 containers: []
	W1212 20:37:54.259402  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:54.259410  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:54.259420  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:54.325217  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:54.325237  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:54.339913  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:54.339930  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:54.403764  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:54.395245   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.396349   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.397140   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398216   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398891   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:54.395245   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.396349   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.397140   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398216   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:54.398891   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:54.403774  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:54.403786  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:54.474019  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:54.474039  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
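
The "Gathering logs for ..." steps simply shell out to journalctl, dmesg and crictl/docker with the flags recorded above. A sketch of a wrapper that runs the same commands; the commands are copied verbatim from the log, the wrapper itself is hypothetical:

// gatherlogs.go - hypothetical wrapper around the log-gathering commands shown
// in this section; not minikube's logs.go.
package main

import (
	"fmt"
	"os/exec"
)

func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("==> %s <==\n%s", name, out)
	if err != nil {
		fmt.Printf("(%s exited with error: %v)\n", name, err)
	}
}

func main() {
	// Commands copied verbatim from the log above.
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("CRI-O", "sudo journalctl -u crio -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
}
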
	I1212 20:37:57.003568  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:57.016502  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:57.016560  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:37:57.042988  404800 cri.go:89] found id: ""
	I1212 20:37:57.043003  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.043010  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:37:57.043015  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:37:57.043072  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:37:57.071640  404800 cri.go:89] found id: ""
	I1212 20:37:57.071654  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.071661  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:37:57.071666  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:37:57.071737  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:37:57.098101  404800 cri.go:89] found id: ""
	I1212 20:37:57.098115  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.098123  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:37:57.098128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:37:57.098185  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:37:57.128276  404800 cri.go:89] found id: ""
	I1212 20:37:57.128300  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.128307  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:37:57.128312  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:37:57.128432  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:37:57.158908  404800 cri.go:89] found id: ""
	I1212 20:37:57.158922  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.158930  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:37:57.158939  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:37:57.159004  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:37:57.186146  404800 cri.go:89] found id: ""
	I1212 20:37:57.186161  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.186169  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:37:57.186174  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:37:57.186233  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:37:57.210969  404800 cri.go:89] found id: ""
	I1212 20:37:57.210984  404800 logs.go:282] 0 containers: []
	W1212 20:37:57.210991  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:37:57.210999  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:37:57.211017  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:37:57.225391  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:37:57.225407  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:37:57.289597  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:37:57.280576   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.281422   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.283487   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.284167   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.285566   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:37:57.280576   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.281422   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.283487   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.284167   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:37:57.285566   14375 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:37:57.289607  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:37:57.289617  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:37:57.362750  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:37:57.362771  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:37:57.396453  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:37:57.396470  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:37:59.967653  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:37:59.977921  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:37:59.977984  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:00.032267  404800 cri.go:89] found id: ""
	I1212 20:38:00.032297  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.032306  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:00.032312  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:00.032410  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:00.203733  404800 cri.go:89] found id: ""
	I1212 20:38:00.203752  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.203760  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:00.203766  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:00.203831  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:00.252579  404800 cri.go:89] found id: ""
	I1212 20:38:00.252596  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.252604  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:00.252610  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:00.252678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:00.301983  404800 cri.go:89] found id: ""
	I1212 20:38:00.302000  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.302009  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:00.302014  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:00.302081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:00.336785  404800 cri.go:89] found id: ""
	I1212 20:38:00.336813  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.336821  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:00.336827  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:00.336905  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:00.369703  404800 cri.go:89] found id: ""
	I1212 20:38:00.369720  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.369728  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:00.369749  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:00.369837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:00.404624  404800 cri.go:89] found id: ""
	I1212 20:38:00.404641  404800 logs.go:282] 0 containers: []
	W1212 20:38:00.404649  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:00.404657  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:00.404669  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:00.473595  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:00.473616  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:00.493555  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:00.493572  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:00.568400  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:00.559640   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.560467   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562140   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562808   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.564591   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:00.559640   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.560467   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562140   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.562808   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:00.564591   14490 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:00.568411  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:00.568425  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:00.641391  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:00.641416  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:03.171500  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:03.182094  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:03.182153  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:03.207380  404800 cri.go:89] found id: ""
	I1212 20:38:03.207395  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.207402  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:03.207407  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:03.207465  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:03.232766  404800 cri.go:89] found id: ""
	I1212 20:38:03.232781  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.232788  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:03.232793  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:03.232856  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:03.263589  404800 cri.go:89] found id: ""
	I1212 20:38:03.263604  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.263611  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:03.263620  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:03.263678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:03.289719  404800 cri.go:89] found id: ""
	I1212 20:38:03.289734  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.289741  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:03.289755  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:03.289815  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:03.316755  404800 cri.go:89] found id: ""
	I1212 20:38:03.316770  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.316778  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:03.316783  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:03.316845  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:03.344424  404800 cri.go:89] found id: ""
	I1212 20:38:03.344438  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.344445  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:03.344451  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:03.344508  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:03.371242  404800 cri.go:89] found id: ""
	I1212 20:38:03.371257  404800 logs.go:282] 0 containers: []
	W1212 20:38:03.371265  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:03.371273  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:03.371284  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:03.439155  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:03.439177  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:03.456896  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:03.456912  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:03.536136  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:03.527316   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.527920   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.529686   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.530397   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.532142   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:03.527316   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.527920   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.529686   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.530397   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:03.532142   14595 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:03.536146  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:03.536159  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:03.610647  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:03.610666  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:06.146575  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:06.157383  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:06.157441  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:06.183306  404800 cri.go:89] found id: ""
	I1212 20:38:06.183321  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.183329  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:06.183334  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:06.183393  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:06.210325  404800 cri.go:89] found id: ""
	I1212 20:38:06.210340  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.210348  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:06.210353  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:06.210411  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:06.235611  404800 cri.go:89] found id: ""
	I1212 20:38:06.235625  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.235632  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:06.235638  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:06.235699  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:06.261846  404800 cri.go:89] found id: ""
	I1212 20:38:06.261860  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.261867  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:06.261872  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:06.261938  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:06.290103  404800 cri.go:89] found id: ""
	I1212 20:38:06.290116  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.290123  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:06.290128  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:06.290185  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:06.316022  404800 cri.go:89] found id: ""
	I1212 20:38:06.316037  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.316044  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:06.316049  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:06.316107  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:06.342973  404800 cri.go:89] found id: ""
	I1212 20:38:06.342988  404800 logs.go:282] 0 containers: []
	W1212 20:38:06.342996  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:06.343004  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:06.343015  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:06.413249  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:06.413270  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:06.428467  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:06.428492  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:06.521492  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:06.507208   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.508013   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.511867   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.515565   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.517219   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:06.507208   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.508013   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.511867   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.515565   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:06.517219   14693 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:06.521503  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:06.521513  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:06.591077  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:06.591100  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:09.125976  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:09.136849  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:09.136908  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:09.163513  404800 cri.go:89] found id: ""
	I1212 20:38:09.163528  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.163535  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:09.163541  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:09.163603  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:09.194011  404800 cri.go:89] found id: ""
	I1212 20:38:09.194026  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.194033  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:09.194038  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:09.194098  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:09.223187  404800 cri.go:89] found id: ""
	I1212 20:38:09.223201  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.223214  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:09.223219  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:09.223278  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:09.253410  404800 cri.go:89] found id: ""
	I1212 20:38:09.253424  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.253431  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:09.253436  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:09.253509  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:09.278330  404800 cri.go:89] found id: ""
	I1212 20:38:09.278344  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.278351  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:09.278356  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:09.278416  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:09.307840  404800 cri.go:89] found id: ""
	I1212 20:38:09.307854  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.307861  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:09.307866  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:09.307924  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:09.335632  404800 cri.go:89] found id: ""
	I1212 20:38:09.335646  404800 logs.go:282] 0 containers: []
	W1212 20:38:09.335653  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:09.335660  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:09.335671  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:09.406024  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:09.406045  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:09.434314  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:09.434331  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:09.515858  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:09.515880  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:09.532868  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:09.532885  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:09.599150  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:09.591061   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.591515   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593132   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593474   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.595021   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:09.591061   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.591515   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593132   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.593474   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:09.595021   14818 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:12.099436  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:12.110285  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:12.110345  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:12.135810  404800 cri.go:89] found id: ""
	I1212 20:38:12.135825  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.135832  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:12.135837  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:12.135897  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:12.160429  404800 cri.go:89] found id: ""
	I1212 20:38:12.160444  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.160451  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:12.160456  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:12.160511  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:12.187065  404800 cri.go:89] found id: ""
	I1212 20:38:12.187080  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.187087  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:12.187092  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:12.187154  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:12.212658  404800 cri.go:89] found id: ""
	I1212 20:38:12.212673  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.212681  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:12.212686  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:12.212743  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:12.238821  404800 cri.go:89] found id: ""
	I1212 20:38:12.238836  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.238843  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:12.238848  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:12.238909  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:12.265300  404800 cri.go:89] found id: ""
	I1212 20:38:12.265315  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.265322  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:12.265332  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:12.265392  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:12.292396  404800 cri.go:89] found id: ""
	I1212 20:38:12.292410  404800 logs.go:282] 0 containers: []
	W1212 20:38:12.292418  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:12.292435  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:12.292445  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:12.358716  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:12.358736  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:12.374039  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:12.374056  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:12.438679  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:12.429880   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.430412   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432221   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432895   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.434800   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:12.429880   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.430412   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432221   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.432895   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:12.434800   14904 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:12.438690  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:12.438701  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:12.519199  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:12.519218  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:15.058664  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:15.078525  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:15.078590  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:15.105060  404800 cri.go:89] found id: ""
	I1212 20:38:15.105075  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.105082  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:15.105088  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:15.105153  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:15.133041  404800 cri.go:89] found id: ""
	I1212 20:38:15.133056  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.133063  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:15.133068  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:15.133133  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:15.160326  404800 cri.go:89] found id: ""
	I1212 20:38:15.160340  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.160347  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:15.160353  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:15.160435  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:15.187814  404800 cri.go:89] found id: ""
	I1212 20:38:15.187828  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.187835  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:15.187840  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:15.187900  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:15.227819  404800 cri.go:89] found id: ""
	I1212 20:38:15.227833  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.227839  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:15.227844  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:15.227901  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:15.255383  404800 cri.go:89] found id: ""
	I1212 20:38:15.255398  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.255404  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:15.255410  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:15.255468  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:15.280977  404800 cri.go:89] found id: ""
	I1212 20:38:15.280991  404800 logs.go:282] 0 containers: []
	W1212 20:38:15.280997  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:15.281005  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:15.281022  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:15.347810  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:15.347832  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:15.362524  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:15.362541  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:15.427106  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:15.418336   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.419038   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.420787   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.421428   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.423218   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:15.418336   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.419038   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.420787   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.421428   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:15.423218   15009 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:15.427116  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:15.427127  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:15.497224  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:15.497244  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:18.029289  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:18.044111  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:18.044210  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:18.071723  404800 cri.go:89] found id: ""
	I1212 20:38:18.071737  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.071745  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:18.071750  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:18.071810  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:18.099105  404800 cri.go:89] found id: ""
	I1212 20:38:18.099119  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.099126  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:18.099131  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:18.099187  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:18.123656  404800 cri.go:89] found id: ""
	I1212 20:38:18.123670  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.123677  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:18.123682  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:18.123739  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:18.150020  404800 cri.go:89] found id: ""
	I1212 20:38:18.150033  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.150040  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:18.150045  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:18.150101  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:18.174527  404800 cri.go:89] found id: ""
	I1212 20:38:18.174541  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.174548  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:18.174552  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:18.174608  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:18.198686  404800 cri.go:89] found id: ""
	I1212 20:38:18.198701  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.198716  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:18.198722  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:18.198779  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:18.223482  404800 cri.go:89] found id: ""
	I1212 20:38:18.223496  404800 logs.go:282] 0 containers: []
	W1212 20:38:18.223512  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:18.223521  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:18.223531  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:18.289154  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:18.289176  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:18.303954  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:18.303970  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:18.371467  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:18.362642   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.363507   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365091   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365692   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.367280   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:18.362642   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.363507   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365091   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.365692   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:18.367280   15114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:18.371477  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:18.371493  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:18.440117  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:18.440138  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:20.983282  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:20.993766  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:20.993829  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:21.020992  404800 cri.go:89] found id: ""
	I1212 20:38:21.021006  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.021014  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:21.021019  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:21.021081  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:21.047844  404800 cri.go:89] found id: ""
	I1212 20:38:21.047857  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.047865  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:21.047869  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:21.047930  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:21.073011  404800 cri.go:89] found id: ""
	I1212 20:38:21.073025  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.073033  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:21.073038  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:21.073095  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:21.098802  404800 cri.go:89] found id: ""
	I1212 20:38:21.098816  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.098823  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:21.098829  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:21.098884  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:21.127579  404800 cri.go:89] found id: ""
	I1212 20:38:21.127594  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.127601  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:21.127606  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:21.127672  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:21.154921  404800 cri.go:89] found id: ""
	I1212 20:38:21.154935  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.154942  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:21.154947  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:21.155001  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:21.181275  404800 cri.go:89] found id: ""
	I1212 20:38:21.181290  404800 logs.go:282] 0 containers: []
	W1212 20:38:21.181297  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:21.181304  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:21.181316  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:21.197100  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:21.197118  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:21.263963  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:21.255290   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.255727   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.257359   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.258725   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.259518   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:21.255290   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.255727   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.257359   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.258725   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:21.259518   15221 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:21.263974  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:21.263991  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:21.335974  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:21.335994  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:21.364201  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:21.364220  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:23.937090  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:23.947413  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:23.947474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:23.973243  404800 cri.go:89] found id: ""
	I1212 20:38:23.973258  404800 logs.go:282] 0 containers: []
	W1212 20:38:23.973265  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:23.973270  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:23.973324  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:23.999530  404800 cri.go:89] found id: ""
	I1212 20:38:23.999545  404800 logs.go:282] 0 containers: []
	W1212 20:38:23.999552  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:23.999557  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:23.999616  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:24.030165  404800 cri.go:89] found id: ""
	I1212 20:38:24.030180  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.030187  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:24.030193  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:24.030254  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:24.059776  404800 cri.go:89] found id: ""
	I1212 20:38:24.059792  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.059799  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:24.059804  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:24.059882  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:24.086292  404800 cri.go:89] found id: ""
	I1212 20:38:24.086306  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.086330  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:24.086338  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:24.086427  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:24.112150  404800 cri.go:89] found id: ""
	I1212 20:38:24.112164  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.112180  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:24.112185  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:24.112240  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:24.137517  404800 cri.go:89] found id: ""
	I1212 20:38:24.137532  404800 logs.go:282] 0 containers: []
	W1212 20:38:24.137539  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:24.137547  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:24.137557  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:24.207037  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:24.207056  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:24.222129  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:24.222144  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:24.288581  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:24.279746   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.280696   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282388   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282920   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.284780   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:24.279746   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.280696   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282388   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.282920   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:24.284780   15331 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:24.288595  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:24.288605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:24.357884  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:24.357903  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:26.887217  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:26.897518  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:26.897580  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:26.926965  404800 cri.go:89] found id: ""
	I1212 20:38:26.926980  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.926987  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:26.926992  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:26.927052  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:26.952974  404800 cri.go:89] found id: ""
	I1212 20:38:26.952988  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.952995  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:26.953000  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:26.953060  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:26.978786  404800 cri.go:89] found id: ""
	I1212 20:38:26.978801  404800 logs.go:282] 0 containers: []
	W1212 20:38:26.978808  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:26.978813  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:26.978870  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:27.008564  404800 cri.go:89] found id: ""
	I1212 20:38:27.008580  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.008590  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:27.008595  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:27.008659  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:27.036286  404800 cri.go:89] found id: ""
	I1212 20:38:27.036301  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.036308  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:27.036313  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:27.036391  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:27.061515  404800 cri.go:89] found id: ""
	I1212 20:38:27.061529  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.061536  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:27.061541  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:27.061604  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:27.090603  404800 cri.go:89] found id: ""
	I1212 20:38:27.090617  404800 logs.go:282] 0 containers: []
	W1212 20:38:27.090624  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:27.090632  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:27.090642  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:27.159097  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:27.150336   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.151193   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.152795   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.153435   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.155082   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:27.150336   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.151193   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.152795   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.153435   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:27.155082   15427 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:27.159107  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:27.159118  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:27.228300  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:27.228321  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:27.258850  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:27.258867  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:27.328117  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:27.328139  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:29.843406  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:29.853466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:29.853526  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:29.878238  404800 cri.go:89] found id: ""
	I1212 20:38:29.878253  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.878260  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:29.878265  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:29.878323  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:29.907469  404800 cri.go:89] found id: ""
	I1212 20:38:29.907483  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.907490  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:29.907495  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:29.907550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:29.932873  404800 cri.go:89] found id: ""
	I1212 20:38:29.932887  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.932894  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:29.932900  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:29.932962  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:29.958139  404800 cri.go:89] found id: ""
	I1212 20:38:29.958153  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.958160  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:29.958165  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:29.958222  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:29.984390  404800 cri.go:89] found id: ""
	I1212 20:38:29.984405  404800 logs.go:282] 0 containers: []
	W1212 20:38:29.984412  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:29.984416  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:29.984474  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:30.027335  404800 cri.go:89] found id: ""
	I1212 20:38:30.027351  404800 logs.go:282] 0 containers: []
	W1212 20:38:30.027360  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:30.027365  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:30.027440  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:30.094850  404800 cri.go:89] found id: ""
	I1212 20:38:30.094867  404800 logs.go:282] 0 containers: []
	W1212 20:38:30.094883  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:30.094911  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:30.094939  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:30.129199  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:30.129217  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:30.196813  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:30.196832  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:30.212809  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:30.212829  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:30.281108  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:30.272853   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.273567   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275146   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275609   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.277153   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:30.272853   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.273567   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275146   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.275609   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:30.277153   15549 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:30.281119  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:30.281130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:32.853025  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:32.863369  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:32.863434  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:32.890487  404800 cri.go:89] found id: ""
	I1212 20:38:32.890501  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.890508  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:32.890513  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:32.890570  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:32.915071  404800 cri.go:89] found id: ""
	I1212 20:38:32.915085  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.915093  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:32.915098  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:32.915155  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:32.940096  404800 cri.go:89] found id: ""
	I1212 20:38:32.940117  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.940131  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:32.940142  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:32.940234  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:32.965615  404800 cri.go:89] found id: ""
	I1212 20:38:32.965629  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.965644  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:32.965649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:32.965705  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:32.990438  404800 cri.go:89] found id: ""
	I1212 20:38:32.990452  404800 logs.go:282] 0 containers: []
	W1212 20:38:32.990459  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:32.990466  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:32.990527  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:33.018112  404800 cri.go:89] found id: ""
	I1212 20:38:33.018134  404800 logs.go:282] 0 containers: []
	W1212 20:38:33.018141  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:33.018146  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:33.018213  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:33.045014  404800 cri.go:89] found id: ""
	I1212 20:38:33.045029  404800 logs.go:282] 0 containers: []
	W1212 20:38:33.045036  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:33.045043  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:33.045054  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:33.116627  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:33.116649  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:33.131589  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:33.131605  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:33.200143  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:33.191174   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.192118   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.193903   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.194394   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.196060   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:33.191174   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.192118   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.193903   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.194394   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:33.196060   15642 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:33.200152  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:33.200165  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:33.270338  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:33.270359  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:35.806115  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:35.816131  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:35.816187  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:35.841646  404800 cri.go:89] found id: ""
	I1212 20:38:35.841660  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.841667  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:35.841672  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:35.841728  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:35.871233  404800 cri.go:89] found id: ""
	I1212 20:38:35.871247  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.871254  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:35.871259  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:35.871316  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:35.896270  404800 cri.go:89] found id: ""
	I1212 20:38:35.896285  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.896292  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:35.896297  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:35.896354  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:35.923679  404800 cri.go:89] found id: ""
	I1212 20:38:35.923693  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.923700  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:35.923705  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:35.923796  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:35.950841  404800 cri.go:89] found id: ""
	I1212 20:38:35.950856  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.950862  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:35.950867  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:35.950924  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:35.981198  404800 cri.go:89] found id: ""
	I1212 20:38:35.981212  404800 logs.go:282] 0 containers: []
	W1212 20:38:35.981219  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:35.981224  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:35.981282  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:36.016848  404800 cri.go:89] found id: ""
	I1212 20:38:36.016865  404800 logs.go:282] 0 containers: []
	W1212 20:38:36.016872  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:36.016881  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:36.016892  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:36.085541  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:36.085562  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:36.100886  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:36.100904  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:36.169874  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:36.161259   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.162033   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.163626   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.164180   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.165318   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:36.161259   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.162033   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.163626   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.164180   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:36.165318   15748 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:36.169886  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:36.169897  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:36.239866  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:36.239886  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:38.770757  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:38.781375  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:38.781433  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:38.809421  404800 cri.go:89] found id: ""
	I1212 20:38:38.809436  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.809443  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:38.809448  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:38.809506  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:38.839566  404800 cri.go:89] found id: ""
	I1212 20:38:38.839579  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.839586  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:38.839591  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:38.839652  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:38.865187  404800 cri.go:89] found id: ""
	I1212 20:38:38.865201  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.865208  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:38.865213  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:38.865272  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:38.890808  404800 cri.go:89] found id: ""
	I1212 20:38:38.890822  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.890829  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:38.890835  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:38.890891  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:38.917091  404800 cri.go:89] found id: ""
	I1212 20:38:38.917104  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.917117  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:38.917122  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:38.917179  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:38.942942  404800 cri.go:89] found id: ""
	I1212 20:38:38.942957  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.942964  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:38.942970  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:38.943030  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:38.973257  404800 cri.go:89] found id: ""
	I1212 20:38:38.973271  404800 logs.go:282] 0 containers: []
	W1212 20:38:38.973278  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:38.973286  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:38.973296  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:39.043336  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:39.043356  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:39.072568  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:39.072588  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:39.140916  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:39.140937  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:39.157933  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:39.157949  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:39.223417  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:39.215410   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.216412   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.217404   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.218045   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.219600   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:39.215410   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.216412   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.217404   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.218045   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:39.219600   15866 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:41.723637  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:41.734660  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:41.734716  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:41.767247  404800 cri.go:89] found id: ""
	I1212 20:38:41.767262  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.767269  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:41.767275  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:41.767328  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:41.796221  404800 cri.go:89] found id: ""
	I1212 20:38:41.796235  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.796248  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:41.796253  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:41.796312  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:41.821187  404800 cri.go:89] found id: ""
	I1212 20:38:41.821203  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.821216  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:41.821221  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:41.821284  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:41.847287  404800 cri.go:89] found id: ""
	I1212 20:38:41.847301  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.847308  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:41.847313  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:41.847372  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:41.872067  404800 cri.go:89] found id: ""
	I1212 20:38:41.872082  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.872089  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:41.872093  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:41.872152  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:41.897796  404800 cri.go:89] found id: ""
	I1212 20:38:41.897811  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.897818  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:41.897823  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:41.897881  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:41.923795  404800 cri.go:89] found id: ""
	I1212 20:38:41.923811  404800 logs.go:282] 0 containers: []
	W1212 20:38:41.923818  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:41.923825  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:41.923836  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:41.990470  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:41.990491  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:42.009111  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:42.009130  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:42.088409  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:42.077817   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.078495   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.081716   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.082488   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.083610   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:42.077817   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.078495   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.081716   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.082488   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:42.083610   15960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:42.088421  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:42.088433  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:42.192507  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:42.192534  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:44.727139  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:44.739542  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:44.739600  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:44.773501  404800 cri.go:89] found id: ""
	I1212 20:38:44.773515  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.773522  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:44.773527  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:44.773589  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:44.800128  404800 cri.go:89] found id: ""
	I1212 20:38:44.800142  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.800149  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:44.800154  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:44.800211  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:44.825549  404800 cri.go:89] found id: ""
	I1212 20:38:44.825563  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.825571  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:44.825576  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:44.825641  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:44.851616  404800 cri.go:89] found id: ""
	I1212 20:38:44.851630  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.851637  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:44.851642  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:44.851701  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:44.877278  404800 cri.go:89] found id: ""
	I1212 20:38:44.877293  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.877300  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:44.877305  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:44.877365  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:44.905623  404800 cri.go:89] found id: ""
	I1212 20:38:44.905637  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.905644  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:44.905649  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:44.905705  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:44.931299  404800 cri.go:89] found id: ""
	I1212 20:38:44.931313  404800 logs.go:282] 0 containers: []
	W1212 20:38:44.931319  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:44.931327  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:44.931338  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:44.998840  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:44.998865  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:45.080550  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:45.080572  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:45.173764  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:45.161784   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.162860   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.164308   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166462   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166938   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:45.161784   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.162860   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.164308   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166462   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:45.166938   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:45.173775  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:45.173787  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:45.264449  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:45.264506  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:47.816513  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:47.826919  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:47.826978  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:47.856068  404800 cri.go:89] found id: ""
	I1212 20:38:47.856083  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.856090  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:47.856095  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:47.856154  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:47.883508  404800 cri.go:89] found id: ""
	I1212 20:38:47.883522  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.883529  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:47.883534  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:47.883595  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:47.909513  404800 cri.go:89] found id: ""
	I1212 20:38:47.909527  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.909534  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:47.909539  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:47.909617  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:47.939000  404800 cri.go:89] found id: ""
	I1212 20:38:47.939015  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.939022  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:47.939027  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:47.939084  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:47.965875  404800 cri.go:89] found id: ""
	I1212 20:38:47.965889  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.965897  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:47.965902  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:47.965975  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:47.992041  404800 cri.go:89] found id: ""
	I1212 20:38:47.992056  404800 logs.go:282] 0 containers: []
	W1212 20:38:47.992063  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:47.992068  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:47.992127  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:48.022837  404800 cri.go:89] found id: ""
	I1212 20:38:48.022852  404800 logs.go:282] 0 containers: []
	W1212 20:38:48.022860  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:48.022867  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:48.022880  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:48.039393  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:48.039410  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:48.107317  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:48.098264   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.099224   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.100841   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.101682   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.102665   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:48.098264   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.099224   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.100841   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.101682   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:48.102665   16171 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:48.107328  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:48.107340  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:48.175841  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:48.175861  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:48.210572  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:48.210594  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:50.783090  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:50.796736  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:50.796840  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:50.825233  404800 cri.go:89] found id: ""
	I1212 20:38:50.825248  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.825255  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:50.825261  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:50.825319  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:50.852180  404800 cri.go:89] found id: ""
	I1212 20:38:50.852194  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.852201  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:50.852206  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:50.852262  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:50.878747  404800 cri.go:89] found id: ""
	I1212 20:38:50.878763  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.878770  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:50.878775  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:50.878835  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:50.904522  404800 cri.go:89] found id: ""
	I1212 20:38:50.904536  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.904543  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:50.904548  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:50.904604  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:50.931344  404800 cri.go:89] found id: ""
	I1212 20:38:50.931360  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.931367  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:50.931372  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:50.931428  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:50.957483  404800 cri.go:89] found id: ""
	I1212 20:38:50.957498  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.957505  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:50.957510  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:50.957568  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:50.982756  404800 cri.go:89] found id: ""
	I1212 20:38:50.982771  404800 logs.go:282] 0 containers: []
	W1212 20:38:50.982778  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:50.982785  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:50.982796  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:51.050968  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:51.050990  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:51.066537  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:51.066556  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:51.139075  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:51.129544   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.130952   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.132306   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.133118   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.134432   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:51.129544   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.130952   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.132306   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.133118   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:51.134432   16277 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:51.139089  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:51.139101  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:51.210713  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:51.210734  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:53.744531  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:53.755115  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:53.755176  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:53.782428  404800 cri.go:89] found id: ""
	I1212 20:38:53.782443  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.782450  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:53.782455  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:53.782513  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:53.809102  404800 cri.go:89] found id: ""
	I1212 20:38:53.809116  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.809123  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:53.809128  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:53.809188  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:53.836479  404800 cri.go:89] found id: ""
	I1212 20:38:53.836492  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.836500  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:53.836505  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:53.836567  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:53.862110  404800 cri.go:89] found id: ""
	I1212 20:38:53.862124  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.862131  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:53.862136  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:53.862193  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:53.888092  404800 cri.go:89] found id: ""
	I1212 20:38:53.888112  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.888119  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:53.888124  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:53.888188  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:53.918381  404800 cri.go:89] found id: ""
	I1212 20:38:53.918412  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.918419  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:53.918425  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:53.918482  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:53.944685  404800 cri.go:89] found id: ""
	I1212 20:38:53.944700  404800 logs.go:282] 0 containers: []
	W1212 20:38:53.944707  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:53.944715  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:53.944726  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:53.976361  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:53.976398  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:54.043617  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:54.043638  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:54.059716  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:54.059735  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:54.127525  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:54.119445   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.119949   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121471   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121928   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.123395   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:54.119445   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.119949   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121471   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.121928   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:54.123395   16392 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:54.127535  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:54.127550  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:56.697671  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:56.712906  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:56.712987  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:56.745699  404800 cri.go:89] found id: ""
	I1212 20:38:56.745713  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.745721  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:56.745726  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:56.745780  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:56.774995  404800 cri.go:89] found id: ""
	I1212 20:38:56.775008  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.775015  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:56.775022  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:56.775076  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:56.801088  404800 cri.go:89] found id: ""
	I1212 20:38:56.801102  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.801109  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:56.801115  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:56.801171  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:56.825939  404800 cri.go:89] found id: ""
	I1212 20:38:56.825953  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.825960  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:56.825965  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:56.826020  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:56.851013  404800 cri.go:89] found id: ""
	I1212 20:38:56.851028  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.851035  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:56.851040  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:56.851099  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:56.875791  404800 cri.go:89] found id: ""
	I1212 20:38:56.875815  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.875823  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:56.875829  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:56.875894  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:56.902106  404800 cri.go:89] found id: ""
	I1212 20:38:56.902121  404800 logs.go:282] 0 containers: []
	W1212 20:38:56.902128  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:56.902136  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:38:56.902146  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:38:56.933095  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:56.933112  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:56.999748  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:56.999770  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:57.023866  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:57.023882  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:38:57.095113  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:38:57.086986   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.087518   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089030   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089355   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.090800   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:38:57.086986   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.087518   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089030   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.089355   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:38:57.090800   16498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:38:57.095123  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:38:57.095133  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:38:59.665770  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:38:59.675717  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:38:59.675792  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:38:59.701606  404800 cri.go:89] found id: ""
	I1212 20:38:59.701620  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.701626  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:38:59.701631  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:38:59.701688  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:38:59.736582  404800 cri.go:89] found id: ""
	I1212 20:38:59.736597  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.736603  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:38:59.736609  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:38:59.736666  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:38:59.764566  404800 cri.go:89] found id: ""
	I1212 20:38:59.764588  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.764595  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:38:59.764602  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:38:59.764664  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:38:59.793759  404800 cri.go:89] found id: ""
	I1212 20:38:59.793774  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.793781  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:38:59.793786  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:38:59.793858  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:38:59.821810  404800 cri.go:89] found id: ""
	I1212 20:38:59.821824  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.821841  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:38:59.821846  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:38:59.821903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:38:59.851583  404800 cri.go:89] found id: ""
	I1212 20:38:59.851606  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.851614  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:38:59.851619  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:38:59.851688  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:38:59.878726  404800 cri.go:89] found id: ""
	I1212 20:38:59.878740  404800 logs.go:282] 0 containers: []
	W1212 20:38:59.878746  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:38:59.878754  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:38:59.878764  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:38:59.943708  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:38:59.943728  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:38:59.958686  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:38:59.958704  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:00.056135  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:00.034453   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.036639   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.037425   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.039837   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.045102   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:00.034453   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.036639   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.037425   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.039837   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:00.045102   16593 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:00.056146  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:00.056159  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:00.155066  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:00.155091  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:02.718200  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:02.729492  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:02.729550  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:02.760544  404800 cri.go:89] found id: ""
	I1212 20:39:02.760559  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.760566  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:02.760571  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:02.760635  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:02.792146  404800 cri.go:89] found id: ""
	I1212 20:39:02.792161  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.792174  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:02.792180  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:02.792239  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:02.818586  404800 cri.go:89] found id: ""
	I1212 20:39:02.818601  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.818609  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:02.818614  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:02.818678  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:02.844172  404800 cri.go:89] found id: ""
	I1212 20:39:02.844187  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.844194  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:02.844199  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:02.844256  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:02.871047  404800 cri.go:89] found id: ""
	I1212 20:39:02.871061  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.871069  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:02.871074  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:02.871132  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:02.898048  404800 cri.go:89] found id: ""
	I1212 20:39:02.898062  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.898070  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:02.898075  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:02.898131  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:02.923194  404800 cri.go:89] found id: ""
	I1212 20:39:02.923209  404800 logs.go:282] 0 containers: []
	W1212 20:39:02.923216  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:02.923224  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:02.923234  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:02.988912  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:02.988932  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:03.004362  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:03.004410  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:03.075259  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:03.067064   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.067768   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069384   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069725   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.071272   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:03.067064   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.067768   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069384   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.069725   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:03.071272   16698 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:03.075269  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:03.075280  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:03.148856  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:03.148876  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:05.677035  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:05.686903  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:05.686961  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:05.722182  404800 cri.go:89] found id: ""
	I1212 20:39:05.722197  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.722204  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:05.722211  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:05.722309  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:05.756818  404800 cri.go:89] found id: ""
	I1212 20:39:05.756832  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.756839  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:05.756844  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:05.756946  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:05.785780  404800 cri.go:89] found id: ""
	I1212 20:39:05.785794  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.785801  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:05.785806  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:05.785862  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:05.816052  404800 cri.go:89] found id: ""
	I1212 20:39:05.816066  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.816073  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:05.816078  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:05.816134  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:05.841695  404800 cri.go:89] found id: ""
	I1212 20:39:05.841709  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.841716  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:05.841721  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:05.841782  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:05.868902  404800 cri.go:89] found id: ""
	I1212 20:39:05.868917  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.868924  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:05.868929  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:05.868998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:05.898574  404800 cri.go:89] found id: ""
	I1212 20:39:05.898589  404800 logs.go:282] 0 containers: []
	W1212 20:39:05.898596  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:05.898603  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:05.898617  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:05.966027  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:05.966048  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:05.980827  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:05.980843  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:06.048518  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:06.039273   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.039766   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041577   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041956   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.043588   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:06.039273   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.039766   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041577   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.041956   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:06.043588   16804 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:06.048528  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:06.048539  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:06.118539  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:06.118566  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:08.648618  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:08.659086  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:08.659147  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:08.684568  404800 cri.go:89] found id: ""
	I1212 20:39:08.684583  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.684590  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:08.684595  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:08.684655  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:08.714848  404800 cri.go:89] found id: ""
	I1212 20:39:08.714862  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.714869  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:08.714873  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:08.714942  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:08.749610  404800 cri.go:89] found id: ""
	I1212 20:39:08.749636  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.749643  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:08.749654  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:08.749720  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:08.780856  404800 cri.go:89] found id: ""
	I1212 20:39:08.780871  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.780878  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:08.780883  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:08.780943  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:08.805202  404800 cri.go:89] found id: ""
	I1212 20:39:08.805216  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.805223  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:08.805228  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:08.805287  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:08.830301  404800 cri.go:89] found id: ""
	I1212 20:39:08.830317  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.830324  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:08.830329  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:08.830389  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:08.857083  404800 cri.go:89] found id: ""
	I1212 20:39:08.857098  404800 logs.go:282] 0 containers: []
	W1212 20:39:08.857105  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:08.857113  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:08.857124  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:08.925442  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:08.925464  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:08.940523  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:08.940539  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:09.013233  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:08.997498   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.998019   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.999823   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.000173   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.008193   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:08.997498   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.998019   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:08.999823   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.000173   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:09.008193   16908 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:09.013243  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:09.013254  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:09.085178  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:09.085198  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:11.613987  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:11.624006  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:11.624073  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:11.648868  404800 cri.go:89] found id: ""
	I1212 20:39:11.648883  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.648890  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:11.648902  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:11.648959  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:11.673750  404800 cri.go:89] found id: ""
	I1212 20:39:11.673764  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.673771  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:11.673776  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:11.673837  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:11.701310  404800 cri.go:89] found id: ""
	I1212 20:39:11.701324  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.701340  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:11.701347  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:11.701407  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:11.728807  404800 cri.go:89] found id: ""
	I1212 20:39:11.728821  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.728828  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:11.728833  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:11.728898  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:11.762671  404800 cri.go:89] found id: ""
	I1212 20:39:11.762706  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.762715  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:11.762720  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:11.762786  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:11.788450  404800 cri.go:89] found id: ""
	I1212 20:39:11.788481  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.788488  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:11.788493  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:11.788559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:11.816693  404800 cri.go:89] found id: ""
	I1212 20:39:11.816707  404800 logs.go:282] 0 containers: []
	W1212 20:39:11.816714  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:11.816722  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:11.816732  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:11.886583  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:11.878248   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.878964   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.880707   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.881208   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.882676   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:11.878248   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.878964   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.880707   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.881208   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:11.882676   17005 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:11.886593  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:11.886604  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:11.955026  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:11.955046  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:11.984471  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:11.984489  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:12.054196  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:12.054217  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:14.569266  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:14.579178  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:14.579234  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:14.603297  404800 cri.go:89] found id: ""
	I1212 20:39:14.603312  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.603319  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:14.603324  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:14.603381  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:14.628304  404800 cri.go:89] found id: ""
	I1212 20:39:14.628318  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.628325  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:14.628330  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:14.628404  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:14.653112  404800 cri.go:89] found id: ""
	I1212 20:39:14.653126  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.653133  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:14.653138  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:14.653201  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:14.678048  404800 cri.go:89] found id: ""
	I1212 20:39:14.678063  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.678078  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:14.678083  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:14.678141  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:14.710561  404800 cri.go:89] found id: ""
	I1212 20:39:14.710584  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.710592  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:14.710597  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:14.710662  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:14.744837  404800 cri.go:89] found id: ""
	I1212 20:39:14.744862  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.744870  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:14.744876  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:14.744943  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:14.777906  404800 cri.go:89] found id: ""
	I1212 20:39:14.777920  404800 logs.go:282] 0 containers: []
	W1212 20:39:14.777927  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:14.777936  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:14.777946  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:14.844303  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:14.844323  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:14.859158  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:14.859179  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:14.922392  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:14.913424   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.913976   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.915631   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.916316   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.918007   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:14.913424   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.913976   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.915631   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.916316   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:14.918007   17116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:14.922427  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:14.922438  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:14.992900  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:14.992920  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:17.545196  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:17.555712  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:17.555785  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:17.582444  404800 cri.go:89] found id: ""
	I1212 20:39:17.582458  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.582465  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:17.582470  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:17.582527  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:17.606892  404800 cri.go:89] found id: ""
	I1212 20:39:17.606906  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.606926  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:17.606932  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:17.606998  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:17.631824  404800 cri.go:89] found id: ""
	I1212 20:39:17.631840  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.631846  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:17.631851  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:17.631906  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:17.658525  404800 cri.go:89] found id: ""
	I1212 20:39:17.658540  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.658548  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:17.658553  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:17.658610  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:17.687764  404800 cri.go:89] found id: ""
	I1212 20:39:17.687777  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.687784  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:17.687789  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:17.687844  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:17.720465  404800 cri.go:89] found id: ""
	I1212 20:39:17.720480  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.720488  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:17.720493  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:17.720561  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:17.758231  404800 cri.go:89] found id: ""
	I1212 20:39:17.758245  404800 logs.go:282] 0 containers: []
	W1212 20:39:17.758261  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:17.758270  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:17.758281  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:17.838248  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:17.838280  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:17.852734  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:17.852752  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:17.918178  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:17.909812   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.910592   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912169   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912772   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.914355   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:17.909812   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.910592   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912169   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.912772   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:17.914355   17220 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:17.918190  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:17.918202  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:17.985880  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:17.985901  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:20.529812  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:20.539894  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:39:20.539954  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:39:20.564821  404800 cri.go:89] found id: ""
	I1212 20:39:20.564834  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.564841  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:39:20.564846  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:39:20.564903  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:39:20.594524  404800 cri.go:89] found id: ""
	I1212 20:39:20.594538  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.594544  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:39:20.594549  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:39:20.594606  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:39:20.619997  404800 cri.go:89] found id: ""
	I1212 20:39:20.620011  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.620018  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:39:20.620023  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:39:20.620079  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:39:20.644542  404800 cri.go:89] found id: ""
	I1212 20:39:20.644557  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.644564  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:39:20.644569  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:39:20.644624  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:39:20.670273  404800 cri.go:89] found id: ""
	I1212 20:39:20.670289  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.670296  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:39:20.670302  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:39:20.670358  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:39:20.694691  404800 cri.go:89] found id: ""
	I1212 20:39:20.694705  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.694712  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:39:20.694717  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:39:20.694771  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:39:20.739770  404800 cri.go:89] found id: ""
	I1212 20:39:20.739784  404800 logs.go:282] 0 containers: []
	W1212 20:39:20.739791  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:39:20.739798  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:39:20.739809  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:39:20.810407  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:39:20.810429  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:39:20.825194  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:39:20.825210  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:39:20.899009  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:39:20.889886   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.890662   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.892566   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.893441   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.894986   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:39:20.889886   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.890662   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.892566   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.893441   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:39:20.894986   17327 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:39:20.899020  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:39:20.899032  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:39:20.977107  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:39:20.977129  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 20:39:23.510601  404800 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:39:23.521033  404800 kubeadm.go:602] duration metric: took 4m3.32729864s to restartPrimaryControlPlane
	W1212 20:39:23.521093  404800 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 20:39:23.521166  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 20:39:23.936973  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:39:23.949604  404800 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 20:39:23.957638  404800 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:39:23.957691  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:39:23.965470  404800 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:39:23.965481  404800 kubeadm.go:158] found existing configuration files:
	
	I1212 20:39:23.965536  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:39:23.973241  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:39:23.973300  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:39:23.980875  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:39:23.989722  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:39:23.989777  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:39:23.997778  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:39:24.007027  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:39:24.007112  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:39:24.016721  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:39:24.025622  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:39:24.025690  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:39:24.034033  404800 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:39:24.077877  404800 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:39:24.079077  404800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:39:24.152874  404800 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:39:24.152937  404800 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:39:24.152972  404800 kubeadm.go:319] OS: Linux
	I1212 20:39:24.153034  404800 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:39:24.153081  404800 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:39:24.153126  404800 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:39:24.153178  404800 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:39:24.153225  404800 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:39:24.153271  404800 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:39:24.153314  404800 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:39:24.153363  404800 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:39:24.153407  404800 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:39:24.219483  404800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:39:24.219589  404800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:39:24.219678  404800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:39:24.228954  404800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:39:24.234481  404800 out.go:252]   - Generating certificates and keys ...
	I1212 20:39:24.234574  404800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:39:24.234638  404800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:39:24.234713  404800 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:39:24.234772  404800 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:39:24.234841  404800 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:39:24.234896  404800 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:39:24.234958  404800 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:39:24.235017  404800 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:39:24.235090  404800 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:39:24.235172  404800 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:39:24.235208  404800 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:39:24.235263  404800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:39:24.294876  404800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:39:24.534877  404800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:39:24.632916  404800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:39:24.763704  404800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:39:25.183116  404800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:39:25.183864  404800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:39:25.186637  404800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:39:25.190125  404800 out.go:252]   - Booting up control plane ...
	I1212 20:39:25.190229  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:39:25.190325  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:39:25.190412  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:39:25.205322  404800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:39:25.205427  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:39:25.215814  404800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:39:25.216163  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:39:25.216236  404800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:39:25.353073  404800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:39:25.353188  404800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:43:25.353162  404800 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000280513s
	I1212 20:43:25.353205  404800 kubeadm.go:319] 
	I1212 20:43:25.353282  404800 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:43:25.353332  404800 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:43:25.353453  404800 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:43:25.353461  404800 kubeadm.go:319] 
	I1212 20:43:25.353609  404800 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:43:25.353657  404800 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:43:25.353688  404800 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:43:25.353691  404800 kubeadm.go:319] 
	I1212 20:43:25.359119  404800 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:43:25.359579  404800 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:43:25.359715  404800 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:43:25.360004  404800 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:43:25.360010  404800 kubeadm.go:319] 
	I1212 20:43:25.360149  404800 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	W1212 20:43:25.360245  404800 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000280513s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 20:43:25.360353  404800 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 20:43:25.770646  404800 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:43:25.783563  404800 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 20:43:25.783624  404800 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 20:43:25.791806  404800 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 20:43:25.791814  404800 kubeadm.go:158] found existing configuration files:
	
	I1212 20:43:25.791862  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1212 20:43:25.799745  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 20:43:25.799799  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 20:43:25.807302  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1212 20:43:25.815035  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 20:43:25.815084  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 20:43:25.822960  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1212 20:43:25.831068  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 20:43:25.831122  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 20:43:25.838463  404800 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1212 20:43:25.846379  404800 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 20:43:25.846433  404800 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 20:43:25.853821  404800 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 20:43:25.894714  404800 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 20:43:25.895009  404800 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 20:43:25.961164  404800 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 20:43:25.961230  404800 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 20:43:25.961265  404800 kubeadm.go:319] OS: Linux
	I1212 20:43:25.961309  404800 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 20:43:25.961355  404800 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 20:43:25.961404  404800 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 20:43:25.961451  404800 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 20:43:25.961498  404800 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 20:43:25.961544  404800 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 20:43:25.961587  404800 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 20:43:25.961634  404800 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 20:43:25.961678  404800 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 20:43:26.029509  404800 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 20:43:26.029612  404800 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 20:43:26.029701  404800 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 20:43:26.038278  404800 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 20:43:26.041933  404800 out.go:252]   - Generating certificates and keys ...
	I1212 20:43:26.042043  404800 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 20:43:26.042118  404800 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 20:43:26.042200  404800 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 20:43:26.042265  404800 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 20:43:26.042338  404800 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 20:43:26.042395  404800 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 20:43:26.042462  404800 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 20:43:26.042527  404800 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 20:43:26.042606  404800 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 20:43:26.042683  404800 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 20:43:26.042722  404800 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 20:43:26.042781  404800 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 20:43:26.129341  404800 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 20:43:26.328670  404800 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 20:43:26.553215  404800 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 20:43:26.647700  404800 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 20:43:26.895572  404800 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 20:43:26.896106  404800 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 20:43:26.898924  404800 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 20:43:26.902076  404800 out.go:252]   - Booting up control plane ...
	I1212 20:43:26.902180  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 20:43:26.902266  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 20:43:26.902331  404800 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 20:43:26.916276  404800 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 20:43:26.916395  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 20:43:26.923968  404800 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 20:43:26.925348  404800 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 20:43:26.925393  404800 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 20:43:27.058187  404800 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 20:43:27.058300  404800 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 20:47:27.059387  404800 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001189054s
	I1212 20:47:27.059415  404800 kubeadm.go:319] 
	I1212 20:47:27.059512  404800 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 20:47:27.059567  404800 kubeadm.go:319] 	- The kubelet is not running
	I1212 20:47:27.059889  404800 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 20:47:27.059895  404800 kubeadm.go:319] 
	I1212 20:47:27.060100  404800 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 20:47:27.060426  404800 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 20:47:27.060479  404800 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 20:47:27.060483  404800 kubeadm.go:319] 
	I1212 20:47:27.064619  404800 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 20:47:27.065062  404800 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 20:47:27.065168  404800 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 20:47:27.065401  404800 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 20:47:27.065405  404800 kubeadm.go:319] 
	I1212 20:47:27.065522  404800 kubeadm.go:403] duration metric: took 12m6.90957682s to StartCluster
	I1212 20:47:27.065550  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 20:47:27.065606  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 20:47:27.065669  404800 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 20:47:27.091473  404800 cri.go:89] found id: ""
	I1212 20:47:27.091488  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.091495  404800 logs.go:284] No container was found matching "kube-apiserver"
	I1212 20:47:27.091500  404800 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 20:47:27.091559  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 20:47:27.118055  404800 cri.go:89] found id: ""
	I1212 20:47:27.118069  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.118076  404800 logs.go:284] No container was found matching "etcd"
	I1212 20:47:27.118081  404800 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 20:47:27.118141  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 20:47:27.144553  404800 cri.go:89] found id: ""
	I1212 20:47:27.144567  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.144574  404800 logs.go:284] No container was found matching "coredns"
	I1212 20:47:27.144579  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 20:47:27.144636  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 20:47:27.170138  404800 cri.go:89] found id: ""
	I1212 20:47:27.170152  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.170172  404800 logs.go:284] No container was found matching "kube-scheduler"
	I1212 20:47:27.170177  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 20:47:27.170242  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 20:47:27.199222  404800 cri.go:89] found id: ""
	I1212 20:47:27.199236  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.199243  404800 logs.go:284] No container was found matching "kube-proxy"
	I1212 20:47:27.199248  404800 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 20:47:27.199305  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 20:47:27.225906  404800 cri.go:89] found id: ""
	I1212 20:47:27.225921  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.225929  404800 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 20:47:27.225934  404800 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 20:47:27.225993  404800 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 20:47:27.251774  404800 cri.go:89] found id: ""
	I1212 20:47:27.251788  404800 logs.go:282] 0 containers: []
	W1212 20:47:27.251795  404800 logs.go:284] No container was found matching "kindnet"
	I1212 20:47:27.251803  404800 logs.go:123] Gathering logs for kubelet ...
	I1212 20:47:27.251843  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 20:47:27.318965  404800 logs.go:123] Gathering logs for dmesg ...
	I1212 20:47:27.318984  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 20:47:27.336153  404800 logs.go:123] Gathering logs for describe nodes ...
	I1212 20:47:27.336169  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 20:47:27.403235  404800 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:47:27.394974   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.395673   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397398   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397865   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.399347   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1212 20:47:27.394974   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.395673   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397398   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.397865   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:47:27.399347   21088 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 20:47:27.403245  404800 logs.go:123] Gathering logs for CRI-O ...
	I1212 20:47:27.403256  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 20:47:27.475348  404800 logs.go:123] Gathering logs for container status ...
	I1212 20:47:27.475369  404800 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 20:47:27.504551  404800 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 20:47:27.504592  404800 out.go:285] * 
	W1212 20:47:27.504699  404800 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:47:27.504759  404800 out.go:285] * 
	W1212 20:47:27.507341  404800 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 20:47:27.514164  404800 out.go:203] 
	W1212 20:47:27.517009  404800 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001189054s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 20:47:27.517056  404800 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 20:47:27.517078  404800 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 20:47:27.520151  404800 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617557022Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617594914Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617644933Z" level=info msg="Create NRI interface"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617744979Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617956402Z" level=info msg="runtime interface created"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617981551Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.617990003Z" level=info msg="runtime interface starting up..."
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618002294Z" level=info msg="starting plugins..."
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618017146Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 20:35:18 functional-261311 crio[9936]: time="2025-12-12T20:35:18.618092166Z" level=info msg="No systemd watchdog enabled"
	Dec 12 20:35:18 functional-261311 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.223066755Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=efc21d87-a1b0-4de5-a48b-a3e014a5db32 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.223827337Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=e9bb6f76-9bf0-445e-a911-5989a7f224b5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.224384709Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=eb32b7e0-d164-45f4-be96-6799b271663a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.224808771Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=192a05d5-754c-4620-9a7e-630a23b2f5d5 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.225240365Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=d03d55da-4587-4eea-8a9a-e52381826a03 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.225676677Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=c7d002dd-9552-4715-b7be-2078da811840 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:39:24 functional-261311 crio[9936]: time="2025-12-12T20:39:24.226165084Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=daf96e40-8252-45d3-a005-ea53669f5cd1 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.033616408Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2a067e1-2c90-429c-b592-c0026a728c8d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.0344028Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=faa7c5c2-57de-45d0-98b9-b1fc40b3897e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.034956632Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=86aed69e-89fa-4789-b7e0-66c21b53b655 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.035606867Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e8b67148-2c8a-4d5b-8bc5-9c052262c589 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036159986Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=56696416-3d0f-4c77-8dbb-77790563b13a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036707976Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ba267d45-19b4-448f-9f92-2993fe38692a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.037209312Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=301781f2-4844-424c-a8ec-9528bb0007ad name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:49:19.524104   22560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:19.524698   22560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:19.526228   22560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:19.526804   22560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:19.528338   22560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:49:19 up  3:31,  0 user,  load average: 0.38, 0.26, 0.52
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:49:17 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:17 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1109.
	Dec 12 20:49:17 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:17 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:18 functional-261311 kubelet[22448]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:18 functional-261311 kubelet[22448]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:18 functional-261311 kubelet[22448]: E1212 20:49:18.018667   22448 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:18 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:18 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:18 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1110.
	Dec 12 20:49:18 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:18 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:18 functional-261311 kubelet[22469]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:18 functional-261311 kubelet[22469]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:18 functional-261311 kubelet[22469]: E1212 20:49:18.768868   22469 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:18 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:18 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:19 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1111.
	Dec 12 20:49:19 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:19 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:19 functional-261311 kubelet[22553]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:19 functional-261311 kubelet[22553]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:19 functional-261311 kubelet[22553]: E1212 20:49:19.510385   22553 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:19 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:19 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
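The kubelet journal captured above shows the underlying failure for this run: the kubelet exits on startup because the node uses cgroup v1 and configuration validation rejects it ("kubelet is configured to not run on a host using cgroup v1"), which is also what the kubeadm preflight warning about 'FailCgroupV1' points at. A minimal troubleshooting sketch follows, using only the commands suggested in the output above plus a hypothetical opt-in to cgroup v1; the YAML key spelling failCgroupV1 and its effect on kubelet v1.35.0-beta.0 are assumptions taken from that warning, not verified in this run:

	# Inspect the failing kubelet, as the kubeadm output above suggests:
	minikube ssh -p functional-261311 -- "sudo systemctl status kubelet --no-pager"
	minikube ssh -p functional-261311 -- "sudo journalctl -xeu kubelet --no-pager | tail -n 20"
	# Hypothetical workaround sketch: opt back into cgroup v1 via the kubelet config file
	# written by kubeadm above (key name failCgroupV1 assumed from the 'FailCgroupV1' warning):
	minikube ssh -p functional-261311 -- "echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml && sudo systemctl restart kubelet"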
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (387.655145ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (2.36s)
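Given the repeated K8S_KUBELET_NOT_RUNNING exit above, a retry along the lines of the suggestion printed by minikube itself would look roughly like this (the flag comes straight from the log's advice; whether it resolves the cgroup v1 validation failure on this host is not confirmed):

	out/minikube-linux-arm64 start -p functional-261311 \
	  --extra-config=kubelet.cgroup-driver=systemd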

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
E1212 20:47:44.061068  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1212 20:47:45.899818  364853 retry.go:31] will retry after 2.780390153s: Temporary Error: Get "http://10.111.19.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1212 20:47:58.680652  364853 retry.go:31] will retry after 3.765324398s: Temporary Error: Get "http://10.111.19.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1212 20:48:12.447040  364853 retry.go:31] will retry after 7.567055353s: Temporary Error: Get "http://10.111.19.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1212 20:48:30.019983  364853 retry.go:31] will retry after 8.681534034s: Temporary Error: Get "http://10.111.19.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
I1212 20:48:48.702712  364853 retry.go:31] will retry after 19.051848551s: Temporary Error: Get "http://10.111.19.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[previous line repeated 117 more times]
E1212 20:50:47.140161  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": dial tcp 192.168.49.2:8441: connect: connection refused
[the pod-list "connect: connection refused" warning above was emitted 22 more times while the test polled for the pod; every attempt hit https://192.168.49.2:8441 and was refused]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (320.864253ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
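The failure itself is simply a 4m0s timeout waiting for a pod matching the integration-test=storage-provisioner label; every poll failed because nothing answers on 192.168.49.2:8441. As a rough manual reproduction of that poll (assuming a kubeconfig entry for the functional-261311 context, and bearing in mind it would fail here for the same reason), something like:

  # same namespace and label selector the warnings above were polling
  kubectl --context functional-261311 -n kube-system get pods -l integration-test=storage-provisioner -o wide
  # or wait on readiness with the same 4-minute budget the test uses
  kubectl --context functional-261311 -n kube-system wait pod -l integration-test=storage-provisioner --for=condition=Ready --timeout=4m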
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
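The inspect output shows the kic container itself is healthy: it is Running, attached to the functional-261311 network at 192.168.49.2, and guest port 8441/tcp (the apiserver port) is published to 127.0.0.1:33165. Rather than reading the whole dump, the one mapping of interest can be pulled out with docker's template support, roughly:

  # print the host port bound to the guest apiserver port (33165 for this run)
  docker inspect -f '{{ (index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort }}' functional-261311
  # probing it would presumably fail with connection refused/reset here,
  # confirming the port is forwarded but nothing is listening behind it
  curl -sk https://127.0.0.1:33165/healthz

So the container and port forwarding look fine; the refusal happens inside the guest.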
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (331.548611ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
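Note the asymmetry: {{.Host}} reports Running (the docker container is up) while {{.APIServer}} above reported Stopped. Both fields come from the same status struct, so they can be read together in one call, a sketch being:

  out/minikube-linux-arm64 status -p functional-261311 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'

which for this run should show the container Running while the Kubernetes components are not.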
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-261311 image load --daemon kicbase/echo-server:functional-261311 --alsologtostderr                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image ls                                                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image save kicbase/echo-server:functional-261311 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image rm kicbase/echo-server:functional-261311 --alsologtostderr                                                                        │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image ls                                                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image ls                                                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image save --daemon kicbase/echo-server:functional-261311 --alsologtostderr                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh            │ functional-261311 ssh sudo cat /etc/ssl/certs/364853.pem                                                                                                  │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh            │ functional-261311 ssh sudo cat /usr/share/ca-certificates/364853.pem                                                                                      │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh            │ functional-261311 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh            │ functional-261311 ssh sudo cat /etc/ssl/certs/3648532.pem                                                                                                 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh            │ functional-261311 ssh sudo cat /usr/share/ca-certificates/3648532.pem                                                                                     │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh            │ functional-261311 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh            │ functional-261311 ssh sudo cat /etc/test/nested/copy/364853/hosts                                                                                         │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image ls --format short --alsologtostderr                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image ls --format yaml --alsologtostderr                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh            │ functional-261311 ssh pgrep buildkitd                                                                                                                     │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ image          │ functional-261311 image build -t localhost/my-image:functional-261311 testdata/build --alsologtostderr                                                    │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image ls                                                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image ls --format json --alsologtostderr                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image          │ functional-261311 image ls --format table --alsologtostderr                                                                                               │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ update-context │ functional-261311 update-context --alsologtostderr -v=2                                                                                                   │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ update-context │ functional-261311 update-context --alsologtostderr -v=2                                                                                                   │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ update-context │ functional-261311 update-context --alsologtostderr -v=2                                                                                                   │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:49:35
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:49:35.533502  422059 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:49:35.533654  422059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:49:35.533680  422059 out.go:374] Setting ErrFile to fd 2...
	I1212 20:49:35.533686  422059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:49:35.533997  422059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:49:35.534386  422059 out.go:368] Setting JSON to false
	I1212 20:49:35.535259  422059 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12728,"bootTime":1765559848,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:49:35.535328  422059 start.go:143] virtualization:  
	I1212 20:49:35.538650  422059 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:49:35.541685  422059 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:49:35.541766  422059 notify.go:221] Checking for updates...
	I1212 20:49:35.547510  422059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:49:35.550302  422059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:49:35.553198  422059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:49:35.556172  422059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:49:35.559136  422059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:49:35.562577  422059 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:49:35.563232  422059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:49:35.589863  422059 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:49:35.589981  422059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:49:35.646483  422059 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:49:35.637420895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:49:35.646591  422059 docker.go:319] overlay module found
	I1212 20:49:35.649676  422059 out.go:179] * Using the docker driver based on existing profile
	I1212 20:49:35.652473  422059 start.go:309] selected driver: docker
	I1212 20:49:35.652493  422059 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:49:35.652603  422059 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:49:35.652719  422059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:49:35.709556  422059 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:49:35.699409249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:49:35.710002  422059 cni.go:84] Creating CNI manager for ""
	I1212 20:49:35.710068  422059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:49:35.710110  422059 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:49:35.713406  422059 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.033616408Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2a067e1-2c90-429c-b592-c0026a728c8d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.0344028Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=faa7c5c2-57de-45d0-98b9-b1fc40b3897e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.034956632Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=86aed69e-89fa-4789-b7e0-66c21b53b655 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.035606867Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e8b67148-2c8a-4d5b-8bc5-9c052262c589 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036159986Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=56696416-3d0f-4c77-8dbb-77790563b13a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036707976Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ba267d45-19b4-448f-9f92-2993fe38692a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.037209312Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=301781f2-4844-424c-a8ec-9528bb0007ad name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.218625071Z" level=info msg="Checking image status: kicbase/echo-server:functional-261311" id=cd3df827-6a6d-4d2c-bdcd-6faef85afd81 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.218794886Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.21883601Z" level=info msg="Image kicbase/echo-server:functional-261311 not found" id=cd3df827-6a6d-4d2c-bdcd-6faef85afd81 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.218898559Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-261311 found" id=cd3df827-6a6d-4d2c-bdcd-6faef85afd81 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.243479012Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-261311" id=4e1ece55-0275-485f-9da4-a32355aa568c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.243683912Z" level=info msg="Image docker.io/kicbase/echo-server:functional-261311 not found" id=4e1ece55-0275-485f-9da4-a32355aa568c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.243763789Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-261311 found" id=4e1ece55-0275-485f-9da4-a32355aa568c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.267873855Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-261311" id=6b6c5f17-032b-439f-9c95-0d2e4bde60be name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.268031534Z" level=info msg="Image localhost/kicbase/echo-server:functional-261311 not found" id=6b6c5f17-032b-439f-9c95-0d2e4bde60be name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.268081389Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-261311 found" id=6b6c5f17-032b-439f-9c95-0d2e4bde60be name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.391094042Z" level=info msg="Checking image status: kicbase/echo-server:functional-261311" id=a659ef62-8ef4-4234-8cb9-e0c72f1e0d92 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.391367521Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.391429118Z" level=info msg="Image kicbase/echo-server:functional-261311 not found" id=a659ef62-8ef4-4234-8cb9-e0c72f1e0d92 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.391507592Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-261311 found" id=a659ef62-8ef4-4234-8cb9-e0c72f1e0d92 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.41766066Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-261311" id=8c5057e6-8710-4d7f-a6ae-b917d5fe72f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.417803824Z" level=info msg="Image docker.io/kicbase/echo-server:functional-261311 not found" id=8c5057e6-8710-4d7f-a6ae-b917d5fe72f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.417843972Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-261311 found" id=8c5057e6-8710-4d7f-a6ae-b917d5fe72f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.446745466Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-261311" id=a881d77f-4185-46d5-ac04-be4972d01e28 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:51:37.568799   25376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:51:37.569754   25376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:51:37.571440   25376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:51:37.571774   25376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:51:37.573153   25376 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:51:37 up  3:34,  0 user,  load average: 0.23, 0.32, 0.51
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:51:35 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:51:35 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1293.
	Dec 12 20:51:35 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:51:35 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:51:35 functional-261311 kubelet[25253]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:51:36 functional-261311 kubelet[25253]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:51:36 functional-261311 kubelet[25253]: E1212 20:51:36.002277   25253 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:51:36 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:51:36 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:51:36 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1294.
	Dec 12 20:51:36 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:51:36 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:51:36 functional-261311 kubelet[25273]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:51:36 functional-261311 kubelet[25273]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:51:36 functional-261311 kubelet[25273]: E1212 20:51:36.780875   25273 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:51:36 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:51:36 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:51:37 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1295.
	Dec 12 20:51:37 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:51:37 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:51:37 functional-261311 kubelet[25358]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:51:37 functional-261311 kubelet[25358]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:51:37 functional-261311 kubelet[25358]: E1212 20:51:37.517686   25358 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:51:37 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:51:37 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (315.754176ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (241.69s)
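The kubelet journal in the logs above is the most likely root cause for this whole block: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1") and systemd's restart counter is already at 1295, so the apiserver never comes back and every dial to :8441 is refused. A quick, standard way to confirm which cgroup version the node is actually running, assuming minikube ssh works against the profile, is:

  # inside the kic container: cgroup2fs => cgroup v2, tmpfs => legacy cgroup v1
  out/minikube-linux-arm64 ssh -p functional-261311 -- stat -fc %T /sys/fs/cgroup/

On this Ubuntu 20.04 / 5.15 host the probe would be expected to report tmpfs, i.e. cgroup v1, matching the kubelet's complaint.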

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.57s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-261311 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-261311 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (61.893545ms)

                                                
                                                
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-261311 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
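Two things are stacked in that output: the real failure is the refused connection to 192.168.49.2:8441, and the template error is just fallout — kubectl fell back to an empty List ({"items":[]}), so index .items 0 indexes an empty slice. Purely as an illustration, a template that guards the index degrades more gracefully on an empty result (the label assertions below would of course still fail):

  kubectl --context functional-261311 get nodes -o go-template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'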
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
-- stdout --
	'Error executing template: template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range. Printing more information for debugging the template:
		template was:
			'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'
		raw data was:
			{"apiVersion":"v1","items":[],"kind":"List","metadata":{"resourceVersion":""}}
		object given to template engine was:
			map[apiVersion:v1 items:[] kind:List metadata:map[resourceVersion:]]
	

                                                
                                                
-- /stdout --
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?
	error executing template "'{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": template: output:1:20: executing "output" at <index .items 0>: error calling index: reflect: slice index out of range

                                                
                                                
** /stderr **
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-261311
helpers_test.go:244: (dbg) docker inspect functional-261311:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	        "Created": "2025-12-12T20:20:33.89723681Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 393453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:20:33.965138507Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hostname",
	        "HostsPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/hosts",
	        "LogPath": "/var/lib/docker/containers/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f/42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f-json.log",
	        "Name": "/functional-261311",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-261311:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-261311",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "42ce82696e8ce8f59e6b37287e34fc79c7aaebb8240fabd8f0e8e9e08b594e2f",
	                "LowerDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec37aab217f085250c3d477db13ef541472488de06e9ac62904d956e329554c0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-261311",
	                "Source": "/var/lib/docker/volumes/functional-261311/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-261311",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-261311",
	                "name.minikube.sigs.k8s.io": "functional-261311",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "05aba127e6879200d8018d7504bfad081109086773354436d1df44aa1c14adbc",
	            "SandboxKey": "/var/run/docker/netns/05aba127e687",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33166"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33164"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33165"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-261311": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:f9:58:d8:6f:f0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f6e4f328ecfe4a2d56516335eca7292ffd836000116e27da670df3185da0d956",
	                    "EndpointID": "0fe49725d998defb3b59598100d492e045ffd349a0f1a02289172002ce9c9e2e",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-261311",
	                        "42ce82696e8c"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p functional-261311 -n functional-261311: exit status 2 (315.359044ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs -n 25
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌───────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND  │                                                                           ARGS                                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├───────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount     │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount2 --alsologtostderr -v=1                      │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ mount     │ -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount3 --alsologtostderr -v=1                      │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh       │ functional-261311 ssh findmnt -T /mount1                                                                                                                  │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh findmnt -T /mount2                                                                                                                  │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh findmnt -T /mount3                                                                                                                  │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ mount     │ -p functional-261311 --kill=true                                                                                                                          │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ start     │ -p functional-261311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ start     │ -p functional-261311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ start     │ -p functional-261311 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ dashboard │ --url --port 36195 -p functional-261311 --alsologtostderr -v=1                                                                                            │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ license   │                                                                                                                                                           │ minikube          │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ ssh       │ functional-261311 ssh sudo systemctl is-active docker                                                                                                     │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ ssh       │ functional-261311 ssh sudo systemctl is-active containerd                                                                                                 │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │                     │
	│ image     │ functional-261311 image load --daemon kicbase/echo-server:functional-261311 --alsologtostderr                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image ls                                                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image load --daemon kicbase/echo-server:functional-261311 --alsologtostderr                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image ls                                                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image load --daemon kicbase/echo-server:functional-261311 --alsologtostderr                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image ls                                                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image save kicbase/echo-server:functional-261311 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image rm kicbase/echo-server:functional-261311 --alsologtostderr                                                                        │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image ls                                                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr                                       │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image ls                                                                                                                                │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	│ image     │ functional-261311 image save --daemon kicbase/echo-server:functional-261311 --alsologtostderr                                                             │ functional-261311 │ jenkins │ v1.37.0 │ 12 Dec 25 20:49 UTC │ 12 Dec 25 20:49 UTC │
	└───────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:49:35
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:49:35.533502  422059 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:49:35.533654  422059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:49:35.533680  422059 out.go:374] Setting ErrFile to fd 2...
	I1212 20:49:35.533686  422059 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:49:35.533997  422059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:49:35.534386  422059 out.go:368] Setting JSON to false
	I1212 20:49:35.535259  422059 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12728,"bootTime":1765559848,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:49:35.535328  422059 start.go:143] virtualization:  
	I1212 20:49:35.538650  422059 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:49:35.541685  422059 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:49:35.541766  422059 notify.go:221] Checking for updates...
	I1212 20:49:35.547510  422059 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:49:35.550302  422059 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:49:35.553198  422059 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:49:35.556172  422059 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:49:35.559136  422059 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:49:35.562577  422059 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:49:35.563232  422059 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:49:35.589863  422059 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:49:35.589981  422059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:49:35.646483  422059 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:49:35.637420895 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:49:35.646591  422059 docker.go:319] overlay module found
	I1212 20:49:35.649676  422059 out.go:179] * Using the docker driver based on existing profile
	I1212 20:49:35.652473  422059 start.go:309] selected driver: docker
	I1212 20:49:35.652493  422059 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:49:35.652603  422059 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:49:35.652719  422059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:49:35.709556  422059 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:49:35.699409249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:49:35.710002  422059 cni.go:84] Creating CNI manager for ""
	I1212 20:49:35.710068  422059 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:49:35.710110  422059 start.go:353] cluster config:
	{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog
:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:49:35.713406  422059 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.033616408Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=e2a067e1-2c90-429c-b592-c0026a728c8d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.0344028Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=faa7c5c2-57de-45d0-98b9-b1fc40b3897e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.034956632Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=86aed69e-89fa-4789-b7e0-66c21b53b655 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.035606867Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=e8b67148-2c8a-4d5b-8bc5-9c052262c589 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036159986Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=56696416-3d0f-4c77-8dbb-77790563b13a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.036707976Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=ba267d45-19b4-448f-9f92-2993fe38692a name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:43:26 functional-261311 crio[9936]: time="2025-12-12T20:43:26.037209312Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=301781f2-4844-424c-a8ec-9528bb0007ad name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.218625071Z" level=info msg="Checking image status: kicbase/echo-server:functional-261311" id=cd3df827-6a6d-4d2c-bdcd-6faef85afd81 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.218794886Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.21883601Z" level=info msg="Image kicbase/echo-server:functional-261311 not found" id=cd3df827-6a6d-4d2c-bdcd-6faef85afd81 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.218898559Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-261311 found" id=cd3df827-6a6d-4d2c-bdcd-6faef85afd81 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.243479012Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-261311" id=4e1ece55-0275-485f-9da4-a32355aa568c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.243683912Z" level=info msg="Image docker.io/kicbase/echo-server:functional-261311 not found" id=4e1ece55-0275-485f-9da4-a32355aa568c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.243763789Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-261311 found" id=4e1ece55-0275-485f-9da4-a32355aa568c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.267873855Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-261311" id=6b6c5f17-032b-439f-9c95-0d2e4bde60be name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.268031534Z" level=info msg="Image localhost/kicbase/echo-server:functional-261311 not found" id=6b6c5f17-032b-439f-9c95-0d2e4bde60be name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:39 functional-261311 crio[9936]: time="2025-12-12T20:49:39.268081389Z" level=info msg="Neither image nor artfiact localhost/kicbase/echo-server:functional-261311 found" id=6b6c5f17-032b-439f-9c95-0d2e4bde60be name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.391094042Z" level=info msg="Checking image status: kicbase/echo-server:functional-261311" id=a659ef62-8ef4-4234-8cb9-e0c72f1e0d92 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.391367521Z" level=info msg="Resolving \"kicbase/echo-server\" using unqualified-search registries (/etc/containers/registries.conf.d/crio.conf)"
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.391429118Z" level=info msg="Image kicbase/echo-server:functional-261311 not found" id=a659ef62-8ef4-4234-8cb9-e0c72f1e0d92 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.391507592Z" level=info msg="Neither image nor artfiact kicbase/echo-server:functional-261311 found" id=a659ef62-8ef4-4234-8cb9-e0c72f1e0d92 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.41766066Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-261311" id=8c5057e6-8710-4d7f-a6ae-b917d5fe72f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.417803824Z" level=info msg="Image docker.io/kicbase/echo-server:functional-261311 not found" id=8c5057e6-8710-4d7f-a6ae-b917d5fe72f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.417843972Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:functional-261311 found" id=8c5057e6-8710-4d7f-a6ae-b917d5fe72f2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 20:49:42 functional-261311 crio[9936]: time="2025-12-12T20:49:42.446745466Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-261311" id=a881d77f-4185-46d5-ac04-be4972d01e28 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1212 20:49:44.926639   23936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:44.927190   23936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:44.928704   23936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:44.929215   23936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1212 20:49:44.930739   23936 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 20:49:44 up  3:32,  0 user,  load average: 0.60, 0.32, 0.53
	Linux functional-261311 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 20:49:42 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:42 functional-261311 kubelet[23748]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:42 functional-261311 kubelet[23748]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:42 functional-261311 kubelet[23748]: E1212 20:49:42.782796   23748 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:42 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:42 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:43 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1143.
	Dec 12 20:49:43 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:43 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:43 functional-261311 kubelet[23791]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:43 functional-261311 kubelet[23791]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:43 functional-261311 kubelet[23791]: E1212 20:49:43.521383   23791 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:43 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:43 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:44 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1144.
	Dec 12 20:49:44 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:44 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:44 functional-261311 kubelet[23845]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:44 functional-261311 kubelet[23845]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 20:49:44 functional-261311 kubelet[23845]: E1212 20:49:44.259504   23845 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 20:49:44 functional-261311 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 20:49:44 functional-261311 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 20:49:44 functional-261311 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1145.
	Dec 12 20:49:44 functional-261311 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 20:49:44 functional-261311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-261311 -n functional-261311: exit status 2 (430.29022ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-261311" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (1.57s)
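
Two separate problems are visible in this failure: the go-template panics because "index .items 0" is evaluated against an empty node list (items:[]), and the list is empty because the kubelet in the logs above is crash-looping on cgroup v1 validation (restart counter 1143-1145), which leaves the apiserver at 192.168.49.2:8441 refusing connections. A hedged sketch, not part of the test suite, of the same label query with an empty-list guard added:

	# Sketch only: the kubectl invocation from functional_test.go:234 with an {{if}}
	# guard, so an empty .items yields empty output instead of a template panic.
	kubectl --context functional-261311 get nodes --output=go-template \
	  --template='{{if .items}}{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}{{end}}'

The guard would only change the error shape here; the minikube.k8s.io/* labels the test expects still require the kubelet and apiserver to come back up.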

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-261311 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-261311 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1212 20:47:35.368713  417824 out.go:360] Setting OutFile to fd 1 ...
I1212 20:47:35.368854  417824 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:47:35.368866  417824 out.go:374] Setting ErrFile to fd 2...
I1212 20:47:35.368873  417824 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:47:35.369169  417824 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:47:35.369477  417824 mustload.go:66] Loading cluster: functional-261311
I1212 20:47:35.369963  417824 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:47:35.370539  417824 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
I1212 20:47:35.392276  417824 host.go:66] Checking if "functional-261311" exists ...
I1212 20:47:35.392649  417824 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 20:47:35.505214  417824 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:47:35.494738347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 20:47:35.505321  417824 api_server.go:166] Checking apiserver status ...
I1212 20:47:35.505381  417824 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1212 20:47:35.505423  417824 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
I1212 20:47:35.542145  417824 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
W1212 20:47:35.670853  417824 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1212 20:47:35.674171  417824 out.go:179] * The control-plane node functional-261311 apiserver is not running: (state=Stopped)
I1212 20:47:35.680616  417824 out.go:179]   To start a cluster, run: "minikube start -p functional-261311"

                                                
                                                
stdout: * The control-plane node functional-261311 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-261311"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-261311 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-261311 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-261311 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-261311 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 417823: os: process already finished
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-261311 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-linux-arm64 -p functional-261311 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)
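The tunnel exits with code 103 because its preflight check finds no kube-apiserver process on the node (the `sudo pgrep -xnf kube-apiserver.*minikube.*` run above returns status 1). A minimal sketch of reproducing that same check is below; the binary path and profile name are taken from the log, while running the pgrep through `minikube ssh` (instead of the test's internal SSH client) is an assumption made for brevity.

    // apiserver_check.go: hypothetical reproduction of the tunnel preflight.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same process pattern the tunnel looks for before it starts routing.
        cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-261311",
            "ssh", "--", "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
        out, err := cmd.CombinedOutput()
        if err != nil {
            // pgrep exits 1 when nothing matches, which is what the log shows:
            // the control-plane apiserver is not running.
            fmt.Printf("apiserver not running: %v\n%s", err, out)
            return
        }
        fmt.Printf("apiserver pid(s): %s", out)
    }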

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.1s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-261311 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-261311 apply -f testdata/testsvc.yaml: exit status 1 (94.44501ms)

                                                
                                                
** stderr ** 
	error: error validating "testdata/testsvc.yaml": error validating data: failed to download openapi: Get "https://192.168.49.2:8441/openapi/v2?timeout=32s": dial tcp 192.168.49.2:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-261311 apply -f testdata/testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (0.10s)
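The `kubectl apply` here never reaches validation: the connection to the apiserver endpoint is refused outright. A minimal sketch (not part of the test suite) that probes the same endpoint kubectl failed to reach is shown below; the address and port come from the error above, the timeout value is an illustrative assumption.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "192.168.49.2:8441", 3*time.Second)
        if err != nil {
            // Matches the "connection refused" in the kubectl stderr above.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port is accepting connections")
    }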

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (101.92s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:288: failed to hit nginx at "http://10.111.19.220": Temporary Error: Get "http://10.111.19.220": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
functional_test_tunnel_test.go:290: (dbg) Run:  kubectl --context functional-261311 get svc nginx-svc
functional_test_tunnel_test.go:290: (dbg) Non-zero exit: kubectl --context functional-261311 get svc nginx-svc: exit status 1 (63.076111ms)

                                                
                                                
** stderr ** 
	The connection to the server 192.168.49.2:8441 was refused - did you specify the right host or port?

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:292: kubectl --context functional-261311 get svc nginx-svc failed: exit status 1
functional_test_tunnel_test.go:294: failed to kubectl get svc nginx-svc:
functional_test_tunnel_test.go:301: expected body to contain "Welcome to nginx!", but got *""*
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (101.92s)
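AccessDirect times out because the tunneled ClusterIP never answers and the fallback `kubectl get svc` is also refused. Below is a sketch of the kind of probe the test performs against the service IP; the 10.111.19.220 address and the expected "Welcome to nginx!" body come from the failure above, while the timeout and retry loop are illustrative assumptions rather than the test's exact parameters.

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        client := &http.Client{Timeout: 5 * time.Second}
        for i := 0; i < 3; i++ {
            resp, err := client.Get("http://10.111.19.220")
            if err != nil {
                // In the log this surfaces as "context deadline exceeded".
                fmt.Println("attempt failed:", err)
                time.Sleep(2 * time.Second)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if strings.Contains(string(body), "Welcome to nginx!") {
                fmt.Println("nginx reachable through the tunnel")
                return
            }
        }
        fmt.Println("giving up")
    }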

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-261311 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-261311 create deployment hello-node --image kicbase/echo-server: exit status 1 (54.603698ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://192.168.49.2:8441/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-261311 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.05s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 service list: exit status 103 (257.749975ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-261311 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-261311"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-linux-arm64 -p functional-261311 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-261311 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-261311\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 service list -o json: exit status 103 (264.86411ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-261311 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-261311"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-linux-arm64 -p functional-261311 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 service --namespace=default --https --url hello-node: exit status 103 (262.125852ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-261311 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-261311"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-linux-arm64 -p functional-261311 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.25s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 service hello-node --url --format={{.IP}}: exit status 103 (250.126799ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-261311 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-261311"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-arm64 -p functional-261311 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-261311 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-261311\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.25s)
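The Format subtest pipes the output of `service --format={{.IP}}` through an IP validity check, and the status message printed above cannot pass it. A minimal sketch of that check, using the stopped-apiserver text from the log as input:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        got := "* The control-plane node functional-261311 apiserver is not running: (state=Stopped)"
        // net.ParseIP returns nil for anything that is not a literal IP address.
        if net.ParseIP(got) == nil {
            fmt.Printf("%q is not a valid IP\n", got)
        }
    }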

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 service hello-node --url: exit status 103 (264.43731ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-261311 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-261311"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-arm64 -p functional-261311 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-261311 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-261311"
functional_test.go:1579: failed to parse "* The control-plane node functional-261311 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-261311\"": parse "* The control-plane node functional-261311 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-261311\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.26s)
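The parse error above comes from feeding the multi-line status message to net/url: the embedded newline is a control character, which url.Parse rejects. A short sketch demonstrating both outcomes; the `http://192.168.49.2:30080` URL is only an example of the form a healthy `service --url` would return, not a value from this run.

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        bad := "* The control-plane node functional-261311 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-261311\""
        if _, err := url.Parse(bad); err != nil {
            fmt.Println("parse failed:", err) // "invalid control character in URL"
        }
        if u, err := url.Parse("http://192.168.49.2:30080"); err == nil {
            fmt.Println("a plain service URL parses fine:", u.Host)
        }
    }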

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765572565314390121" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765572565314390121" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765572565314390121" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001/test-1765572565314390121
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (377.313417ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 20:49:25.691981  364853 retry.go:31] will retry after 300.169961ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 20:49 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 20:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 20:49 test-1765572565314390121
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh cat /mount-9p/test-1765572565314390121
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-261311 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:148: (dbg) Non-zero exit: kubectl --context functional-261311 replace --force -f testdata/busybox-mount-test.yaml: exit status 1 (60.738824ms)

                                                
                                                
** stderr ** 
	error: error when deleting "testdata/busybox-mount-test.yaml": Delete "https://192.168.49.2:8441/api/v1/namespaces/default/pods/busybox-mount": dial tcp 192.168.49.2:8441: connect: connection refused

                                                
                                                
** /stderr **
functional_test_mount_test.go:150: failed to 'kubectl replace' for busybox-mount-test. args "kubectl --context functional-261311 replace --force -f testdata/busybox-mount-test.yaml" : exit status 1
functional_test_mount_test.go:80: "TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port" failed, getting debug info...
functional_test_mount_test.go:81: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:81: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (307.610674ms)

                                                
                                                
-- stdout --
	192.168.49.1 on /mount-9p type 9p (rw,relatime,sync,dirsync,dfltuid=1000,dfltgid=997,access=any,msize=262144,trans=tcp,noextend,port=38403)
	total 2
	-rw-r--r-- 1 docker docker 24 Dec 12 20:49 created-by-test
	-rw-r--r-- 1 docker docker 24 Dec 12 20:49 created-by-test-removed-by-pod
	-rw-r--r-- 1 docker docker 24 Dec 12 20:49 test-1765572565314390121
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:83: debugging command "out/minikube-linux-arm64 -p functional-261311 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001:/mount-9p --alsologtostderr -v=1] ...
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001:/mount-9p --alsologtostderr -v=1] stdout:
* Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:38403
* Userspace file server: 
ufs starting
* Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001 to /mount-9p

                                                
                                                
* NOTE: This process must stay alive for the mount to be accessible ...
* Unmounting /mount-9p ...

                                                
                                                

                                                
                                                
functional_test_mount_test.go:94: (dbg) [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001:/mount-9p --alsologtostderr -v=1] stderr:
I1212 20:49:25.367232  420123 out.go:360] Setting OutFile to fd 1 ...
I1212 20:49:25.367385  420123 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:25.367393  420123 out.go:374] Setting ErrFile to fd 2...
I1212 20:49:25.367397  420123 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:25.367631  420123 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:49:25.367970  420123 mustload.go:66] Loading cluster: functional-261311
I1212 20:49:25.368363  420123 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:25.368913  420123 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
I1212 20:49:25.390974  420123 host.go:66] Checking if "functional-261311" exists ...
I1212 20:49:25.391323  420123 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1212 20:49:25.469730  420123 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:49:25.447325617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I1212 20:49:25.469899  420123 cli_runner.go:164] Run: docker network inspect functional-261311 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 20:49:25.543251  420123 out.go:179] * Mounting host path /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001 into VM as /mount-9p ...
I1212 20:49:25.546450  420123 out.go:179]   - Mount type:   9p
I1212 20:49:25.549521  420123 out.go:179]   - User ID:      docker
I1212 20:49:25.552344  420123 out.go:179]   - Group ID:     docker
I1212 20:49:25.555228  420123 out.go:179]   - Version:      9p2000.L
I1212 20:49:25.558059  420123 out.go:179]   - Message Size: 262144
I1212 20:49:25.560889  420123 out.go:179]   - Options:      map[]
I1212 20:49:25.563714  420123 out.go:179]   - Bind Address: 192.168.49.1:38403
I1212 20:49:25.566612  420123 out.go:179] * Userspace file server: 
I1212 20:49:25.566954  420123 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1212 20:49:25.567054  420123 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
I1212 20:49:25.600601  420123 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
I1212 20:49:25.707472  420123 mount.go:180] unmount for /mount-9p ran successfully
I1212 20:49:25.707498  420123 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1212 20:49:25.716119  420123 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=38403,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1212 20:49:25.726684  420123 main.go:127] stdlog: ufs.go:141 connected
I1212 20:49:25.726849  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tversion tag 65535 msize 262144 version '9P2000.L'
I1212 20:49:25.726896  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rversion tag 65535 msize 262144 version '9P2000'
I1212 20:49:25.727122  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tattach tag 0 fid 0 afid 4294967295 uname 'nobody' nuname 0 aname ''
I1212 20:49:25.727178  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rattach tag 0 aqid (4431c 1453593e 'd')
I1212 20:49:25.727951  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 0
I1212 20:49:25.728006  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431c 1453593e 'd') m d775 at 0 mt 1765572565 l 4096 t 0 d 0 ext )
I1212 20:49:25.731216  420123 lock.go:50] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/.mount-process: {Name:mkc285fc443f3b32a05c7e3c17cc2b31777c5270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 20:49:25.731442  420123 mount.go:105] mount successful: ""
I1212 20:49:25.734865  420123 out.go:179] * Successfully mounted /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1794532786/001 to /mount-9p
I1212 20:49:25.737714  420123 out.go:203] 
I1212 20:49:25.740421  420123 out.go:179] * NOTE: This process must stay alive for the mount to be accessible ...
I1212 20:49:26.524813  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 0
I1212 20:49:26.524908  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431c 1453593e 'd') m d775 at 0 mt 1765572565 l 4096 t 0 d 0 ext )
I1212 20:49:26.525269  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 0 newfid 1 
I1212 20:49:26.525305  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rwalk tag 0 
I1212 20:49:26.525453  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Topen tag 0 fid 1 mode 0
I1212 20:49:26.525505  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Ropen tag 0 qid (4431c 1453593e 'd') iounit 0
I1212 20:49:26.525663  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 0
I1212 20:49:26.525705  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431c 1453593e 'd') m d775 at 0 mt 1765572565 l 4096 t 0 d 0 ext )
I1212 20:49:26.525868  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 1 offset 0 count 262120
I1212 20:49:26.525980  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 258
I1212 20:49:26.526121  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 1 offset 258 count 261862
I1212 20:49:26.526151  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 0
I1212 20:49:26.526270  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 1 offset 258 count 262120
I1212 20:49:26.526297  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 0
I1212 20:49:26.526421  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1212 20:49:26.526478  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rwalk tag 0 (4431d 1453593e '') 
I1212 20:49:26.526584  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:26.526618  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431d 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:26.526742  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:26.526773  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431d 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:26.526894  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 2
I1212 20:49:26.526932  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:26.527064  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 0 newfid 2 0:'test-1765572565314390121' 
I1212 20:49:26.527098  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rwalk tag 0 (4431f 1453593e '') 
I1212 20:49:26.527214  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:26.527246  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('test-1765572565314390121' 'jenkins' 'jenkins' '' q (4431f 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:26.527364  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:26.527395  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('test-1765572565314390121' 'jenkins' 'jenkins' '' q (4431f 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:26.527502  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 2
I1212 20:49:26.527525  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:26.527645  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1212 20:49:26.527684  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rwalk tag 0 (4431e 1453593e '') 
I1212 20:49:26.527798  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:26.527832  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431e 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:26.527949  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:26.527989  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431e 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:26.528110  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 2
I1212 20:49:26.528147  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:26.528271  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 1 offset 258 count 262120
I1212 20:49:26.528298  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 0
I1212 20:49:26.528449  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 1
I1212 20:49:26.528482  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:26.808534  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 0 newfid 1 0:'test-1765572565314390121' 
I1212 20:49:26.808606  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rwalk tag 0 (4431f 1453593e '') 
I1212 20:49:26.808856  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 1
I1212 20:49:26.808908  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('test-1765572565314390121' 'jenkins' 'jenkins' '' q (4431f 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:26.809052  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 1 newfid 2 
I1212 20:49:26.809082  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rwalk tag 0 
I1212 20:49:26.809212  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Topen tag 0 fid 2 mode 0
I1212 20:49:26.809269  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Ropen tag 0 qid (4431f 1453593e '') iounit 0
I1212 20:49:26.809400  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 1
I1212 20:49:26.809434  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('test-1765572565314390121' 'jenkins' 'jenkins' '' q (4431f 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:26.809578  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 2 offset 0 count 262120
I1212 20:49:26.809618  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 24
I1212 20:49:26.809737  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 2 offset 24 count 262120
I1212 20:49:26.809782  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 0
I1212 20:49:26.809929  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 2 offset 24 count 262120
I1212 20:49:26.809975  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 0
I1212 20:49:26.810242  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 2
I1212 20:49:26.810278  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:26.810421  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 1
I1212 20:49:26.810445  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:27.179545  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 0
I1212 20:49:27.179624  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431c 1453593e 'd') m d775 at 0 mt 1765572565 l 4096 t 0 d 0 ext )
I1212 20:49:27.179998  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 0 newfid 1 
I1212 20:49:27.180043  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rwalk tag 0 
I1212 20:49:27.180219  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Topen tag 0 fid 1 mode 0
I1212 20:49:27.180279  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Ropen tag 0 qid (4431c 1453593e 'd') iounit 0
I1212 20:49:27.180447  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 0
I1212 20:49:27.180519  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('001' 'jenkins' 'jenkins' '' q (4431c 1453593e 'd') m d775 at 0 mt 1765572565 l 4096 t 0 d 0 ext )
I1212 20:49:27.180701  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 1 offset 0 count 262120
I1212 20:49:27.180802  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 258
I1212 20:49:27.180938  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 1 offset 258 count 261862
I1212 20:49:27.180967  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 0
I1212 20:49:27.181097  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 1 offset 258 count 262120
I1212 20:49:27.181127  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 0
I1212 20:49:27.181257  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 0 newfid 2 0:'created-by-test' 
I1212 20:49:27.181303  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rwalk tag 0 (4431d 1453593e '') 
I1212 20:49:27.181429  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:27.181472  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431d 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:27.181598  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:27.181633  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('created-by-test' 'jenkins' 'jenkins' '' q (4431d 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:27.181780  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 2
I1212 20:49:27.181803  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:27.181948  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 0 newfid 2 0:'test-1765572565314390121' 
I1212 20:49:27.181983  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rwalk tag 0 (4431f 1453593e '') 
I1212 20:49:27.182112  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:27.182148  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('test-1765572565314390121' 'jenkins' 'jenkins' '' q (4431f 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:27.182269  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:27.182299  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('test-1765572565314390121' 'jenkins' 'jenkins' '' q (4431f 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:27.182434  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 2
I1212 20:49:27.182459  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:27.182590  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 0 newfid 2 0:'created-by-test-removed-by-pod' 
I1212 20:49:27.182649  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rwalk tag 0 (4431e 1453593e '') 
I1212 20:49:27.182793  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:27.182860  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431e 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:27.182982  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tstat tag 0 fid 2
I1212 20:49:27.183024  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rstat tag 0 st ('created-by-test-removed-by-pod' 'jenkins' 'jenkins' '' q (4431e 1453593e '') m 644 at 0 mt 1765572565 l 24 t 0 d 0 ext )
I1212 20:49:27.183144  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 2
I1212 20:49:27.183168  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:27.183295  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tread tag 0 fid 1 offset 258 count 262120
I1212 20:49:27.183327  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rread tag 0 count 0
I1212 20:49:27.183472  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 1
I1212 20:49:27.183503  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:27.184712  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Twalk tag 0 fid 0 newfid 1 0:'pod-dates' 
I1212 20:49:27.184786  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rerror tag 0 ename 'file not found' ecode 0
I1212 20:49:27.458645  420123 main.go:127] stdlog: srv_conn.go:133 >>> 192.168.49.2:35496 Tclunk tag 0 fid 0
I1212 20:49:27.458699  420123 main.go:127] stdlog: srv_conn.go:190 <<< 192.168.49.2:35496 Rclunk tag 0
I1212 20:49:27.459827  420123 main.go:127] stdlog: ufs.go:147 disconnected
I1212 20:49:27.479806  420123 out.go:179] * Unmounting /mount-9p ...
I1212 20:49:27.482736  420123 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f -l /mount-9p || echo "
I1212 20:49:27.489910  420123 mount.go:180] unmount for /mount-9p ran successfully
I1212 20:49:27.490030  420123 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/.mount-process: {Name:mkc285fc443f3b32a05c7e3c17cc2b31777c5270 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1212 20:49:27.493103  420123 out.go:203] 
W1212 20:49:27.496074  420123 out.go:285] X Exiting due to MK_INTERRUPTED: Received terminated signal
X Exiting due to MK_INTERRUPTED: Received terminated signal
I1212 20:49:27.498981  420123 out.go:203] 
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/any-port (2.26s)
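Note that the 9p mount itself came up (the findmnt check succeeds on retry and the created-by-test files are visible); the subtest fails only when the busybox pod replace is refused by the stopped apiserver, so /mount-9p/pod-dates is never written. Below is a sketch of the mount verification step retried at functional_test_mount_test.go:115; the binary path and profile come from the log, and the single fixed 300ms retry is an illustrative simplification.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        check := func() error {
            cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-261311",
                "ssh", "findmnt -T /mount-9p | grep 9p")
            out, err := cmd.CombinedOutput()
            fmt.Print(string(out))
            return err
        }
        if err := check(); err != nil {
            // The log shows one retry after ~300ms before the mount is visible.
            time.Sleep(300 * time.Millisecond)
            if err := check(); err != nil {
                fmt.Println("mount never became visible:", err)
            }
        }
    }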

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (507.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 stop --alsologtostderr -v 5
E1212 20:57:35.804793  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:57:39.904887  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 stop --alsologtostderr -v 5: (37.569373416s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 start --wait true --alsologtostderr -v 5
E1212 20:57:44.061076  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:58:03.513479  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:59:36.832155  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:02:35.805231  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:02:44.061627  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:04:36.831917  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-008703 start --wait true --alsologtostderr -v 5: exit status 80 (7m47.585170229s)

                                                
                                                
-- stdout --
	* [ha-008703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-008703" primary control-plane node in "ha-008703" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	* Enabled addons: 
	
	* Starting "ha-008703-m02" control-plane node in "ha-008703" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:57:42.443959  444203 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:57:42.444139  444203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:57:42.444170  444203 out.go:374] Setting ErrFile to fd 2...
	I1212 20:57:42.444190  444203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:57:42.444488  444203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:57:42.444894  444203 out.go:368] Setting JSON to false
	I1212 20:57:42.445764  444203 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":13215,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:57:42.445866  444203 start.go:143] virtualization:  
	I1212 20:57:42.448973  444203 out.go:179] * [ha-008703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:57:42.452845  444203 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:57:42.452922  444203 notify.go:221] Checking for updates...
	I1212 20:57:42.458690  444203 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:57:42.461546  444203 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:42.464549  444203 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:57:42.467438  444203 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:57:42.470311  444203 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:57:42.473663  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:42.473791  444203 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:57:42.502175  444203 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:57:42.502305  444203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:57:42.567154  444203 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 20:57:42.556873235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:57:42.567281  444203 docker.go:319] overlay module found
	I1212 20:57:42.570683  444203 out.go:179] * Using the docker driver based on existing profile
	I1212 20:57:42.573609  444203 start.go:309] selected driver: docker
	I1212 20:57:42.573638  444203 start.go:927] validating driver "docker" against &{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:42.573801  444203 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:57:42.573920  444203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:57:42.631794  444203 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 20:57:42.621825898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:57:42.632218  444203 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:57:42.632254  444203 cni.go:84] Creating CNI manager for ""
	I1212 20:57:42.632316  444203 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 20:57:42.632425  444203 start.go:353] cluster config:
	{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:42.635654  444203 out.go:179] * Starting "ha-008703" primary control-plane node in "ha-008703" cluster
	I1212 20:57:42.638374  444203 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:57:42.641273  444203 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:57:42.644097  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:42.644143  444203 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 20:57:42.644156  444203 cache.go:65] Caching tarball of preloaded images
	I1212 20:57:42.644194  444203 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:57:42.644262  444203 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:57:42.644272  444203 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:57:42.644440  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:42.664350  444203 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:57:42.664409  444203 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:57:42.664432  444203 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:57:42.664465  444203 start.go:360] acquireMachinesLock for ha-008703: {Name:mk6e7d74f274e3ed345384f8b747c056bd141bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:57:42.664534  444203 start.go:364] duration metric: took 45.473µs to acquireMachinesLock for "ha-008703"
	I1212 20:57:42.664558  444203 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:57:42.664567  444203 fix.go:54] fixHost starting: 
	I1212 20:57:42.664830  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:57:42.682444  444203 fix.go:112] recreateIfNeeded on ha-008703: state=Stopped err=<nil>
	W1212 20:57:42.682482  444203 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:57:42.687702  444203 out.go:252] * Restarting existing docker container for "ha-008703" ...
	I1212 20:57:42.687806  444203 cli_runner.go:164] Run: docker start ha-008703
	I1212 20:57:42.929392  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:57:42.950691  444203 kic.go:430] container "ha-008703" state is running.
	I1212 20:57:42.951124  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:42.975911  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:42.976159  444203 machine.go:94] provisionDockerMachine start ...
	I1212 20:57:42.976233  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:43.000950  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:43.001319  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:43.001348  444203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:57:43.002175  444203 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 20:57:46.155930  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 20:57:46.155957  444203 ubuntu.go:182] provisioning hostname "ha-008703"
	I1212 20:57:46.156028  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.174281  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.174613  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.174631  444203 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703 && echo "ha-008703" | sudo tee /etc/hostname
	I1212 20:57:46.334176  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 20:57:46.334256  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.353092  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.353419  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.353444  444203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:57:46.504764  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:57:46.504855  444203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:57:46.504906  444203 ubuntu.go:190] setting up certificates
	I1212 20:57:46.504931  444203 provision.go:84] configureAuth start
	I1212 20:57:46.505018  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:46.522153  444203 provision.go:143] copyHostCerts
	I1212 20:57:46.522196  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:46.522237  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:57:46.522245  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:46.522321  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:57:46.522414  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:46.522431  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:57:46.522435  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:46.522464  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:57:46.522512  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:46.522532  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:57:46.522536  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:46.522563  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:57:46.522618  444203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703 san=[127.0.0.1 192.168.49.2 ha-008703 localhost minikube]
	I1212 20:57:46.651816  444203 provision.go:177] copyRemoteCerts
	I1212 20:57:46.651886  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:57:46.651968  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.671188  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:46.776309  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:57:46.776386  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:57:46.794675  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:57:46.794741  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1212 20:57:46.813024  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:57:46.813085  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:57:46.830950  444203 provision.go:87] duration metric: took 325.983006ms to configureAuth
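The configureAuth step that just completed regenerated the Docker machine server certificate with the SANs listed above (127.0.0.1, 192.168.49.2, ha-008703, localhost, minikube) and copied it to /etc/docker on the node. If a TLS handshake problem shows up later in a run like this, the SANs on the freshly written cert can be inspected from the host; an illustrative check, not part of the test itself:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'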
	I1212 20:57:46.830977  444203 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:57:46.831235  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:46.831340  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.848478  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.848794  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.848812  444203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:57:47.235920  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:57:47.235994  444203 machine.go:97] duration metric: took 4.259816851s to provisionDockerMachine
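The CRIO_MINIKUBE_OPTIONS file written just above only has an effect if the crio unit inside the kic base image sources /etc/sysconfig/crio.minikube; that wiring is an assumption here, not something this log shows. Two illustrative checks from outside the container:

    minikube ssh -p ha-008703 -- systemctl cat crio
    minikube ssh -p ha-008703 -- cat /etc/sysconfig/crio.minikube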
	I1212 20:57:47.236020  444203 start.go:293] postStartSetup for "ha-008703" (driver="docker")
	I1212 20:57:47.236048  444203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:57:47.236157  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:57:47.236233  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.261608  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.368446  444203 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:57:47.372121  444203 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:57:47.372152  444203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:57:47.372170  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:57:47.372227  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:57:47.372309  444203 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:57:47.372320  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:57:47.372447  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:57:47.380725  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:47.398959  444203 start.go:296] duration metric: took 162.907605ms for postStartSetup
	I1212 20:57:47.399064  444203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:57:47.399134  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.420756  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.525530  444203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:57:47.530321  444203 fix.go:56] duration metric: took 4.865746757s for fixHost
	I1212 20:57:47.530348  444203 start.go:83] releasing machines lock for "ha-008703", held for 4.865800567s
	I1212 20:57:47.530419  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:47.548629  444203 ssh_runner.go:195] Run: cat /version.json
	I1212 20:57:47.548688  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.548950  444203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:57:47.549003  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.573240  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.580519  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.676043  444203 ssh_runner.go:195] Run: systemctl --version
	I1212 20:57:47.771712  444203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:57:47.808898  444203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:57:47.813508  444203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:57:47.813590  444203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:57:47.821723  444203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:57:47.821748  444203 start.go:496] detecting cgroup driver to use...
	I1212 20:57:47.821827  444203 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:57:47.821894  444203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:57:47.837549  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:57:47.851337  444203 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:57:47.851435  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:57:47.867827  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:57:47.881469  444203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:57:47.990806  444203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:57:48.117810  444203 docker.go:234] disabling docker service ...
	I1212 20:57:48.117891  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:57:48.133641  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:57:48.146962  444203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:57:48.263631  444203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:57:48.385870  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:57:48.400502  444203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:57:48.415928  444203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:57:48.415999  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.425436  444203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:57:48.425516  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.434622  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.443654  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.452998  444203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:57:48.462000  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.471517  444203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.480019  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.488892  444203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:57:48.501776  444203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:57:48.509429  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:48.636874  444203 ssh_runner.go:195] Run: sudo systemctl restart crio
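The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before this restart. One way to confirm the effective values is the merged configuration that cri-o itself prints, the same crio config call the test issues further below; illustrative only:

    sudo crio config 2>/dev/null | grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls'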
	I1212 20:57:48.831677  444203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:57:48.831797  444203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:57:48.835749  444203 start.go:564] Will wait 60s for crictl version
	I1212 20:57:48.835860  444203 ssh_runner.go:195] Run: which crictl
	I1212 20:57:48.839496  444203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:57:48.865845  444203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:57:48.865936  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:57:48.896176  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:57:48.926063  444203 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:57:48.928824  444203 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
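The Go template in the command above pulls the network name, driver, subnet, gateway, MTU and per-container IPs out of a single docker network inspect call. A pared-down version of the same query, handy when reading a log like this by hand (illustrative):

    docker network inspect ha-008703 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'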
	I1212 20:57:48.945819  444203 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:57:48.949721  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:57:48.960274  444203 kubeadm.go:884] updating cluster {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:57:48.960470  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:48.960528  444203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:57:48.995177  444203 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:57:48.995203  444203 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:57:48.995261  444203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:57:49.022349  444203 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:57:49.022375  444203 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:57:49.022384  444203 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 20:57:49.022522  444203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
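The unit drop-in shown above is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below (the 359-byte scp). When a kubelet flag looks wrong in a run like this, the rendered unit and the running service can be compared against it; illustrative commands:

    minikube ssh -p ha-008703 -- systemctl cat kubelet
    minikube ssh -p ha-008703 -- sudo systemctl status kubelet --no-pager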
	I1212 20:57:49.022613  444203 ssh_runner.go:195] Run: crio config
	I1212 20:57:49.094808  444203 cni.go:84] Creating CNI manager for ""
	I1212 20:57:49.094833  444203 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 20:57:49.094884  444203 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:57:49.094931  444203 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-008703 NodeName:ha-008703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:57:49.095072  444203 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-008703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
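The kubeadm/kubelet/kube-proxy configuration above is written to /var/tmp/minikube/kubeadm.yaml.new further down (the 2206-byte scp). Because this is a restart of an existing cluster, kubeadm init is not rerun with it, but the file can still be checked against the v1beta4 schema without touching the cluster; illustrative, and assuming a kubeadm release that ships the config validate subcommand:

    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new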
	
	I1212 20:57:49.095097  444203 kube-vip.go:115] generating kube-vip config ...
	I1212 20:57:49.095151  444203 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 20:57:49.107313  444203 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
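kube-vip's control-plane load-balancing mode depends on the IPVS kernel modules, and lsmod found none here, so minikube falls back to the plain ARP-based VIP seen in the manifest below. With the docker driver the node containers share the host kernel, so the modules would have to be loaded on the host itself; an illustrative way to do that before a run:

    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
    lsmod | grep ip_vs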
	I1212 20:57:49.107428  444203 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
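Once this static pod is running on the control-plane nodes, the current kube-vip leader holds the plndr-cp-lock lease and answers for the VIP 192.168.49.254 (the APIServerHAVIP from the cluster config). Two illustrative checks once the control plane is back up:

    kubectl -n kube-system get lease plndr-cp-lock
    minikube ssh -p ha-008703 -- ip addr show eth0 | grep 192.168.49.254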
	I1212 20:57:49.107499  444203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:57:49.115345  444203 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:57:49.115415  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1212 20:57:49.123505  444203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1212 20:57:49.136430  444203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:57:49.149479  444203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1212 20:57:49.163560  444203 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 20:57:49.176571  444203 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 20:57:49.180272  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:57:49.190686  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:49.306812  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:57:49.322473  444203 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.2
	I1212 20:57:49.322495  444203 certs.go:195] generating shared ca certs ...
	I1212 20:57:49.322510  444203 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.322646  444203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:57:49.322706  444203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:57:49.322721  444203 certs.go:257] generating profile certs ...
	I1212 20:57:49.322803  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 20:57:49.322831  444203 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904
	I1212 20:57:49.322854  444203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1212 20:57:49.472738  444203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 ...
	I1212 20:57:49.472774  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904: {Name:mk2a5379bc5668a2307c7e3ec981ab026dda45c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.472981  444203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904 ...
	I1212 20:57:49.473001  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904: {Name:mk9431140de21966b13bcbc9ba3792a6b7192788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.473093  444203 certs.go:382] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt
	I1212 20:57:49.473241  444203 certs.go:386] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key
	I1212 20:57:49.473382  444203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 20:57:49.473401  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:57:49.473419  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:57:49.473436  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:57:49.473449  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:57:49.473464  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:57:49.473478  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:57:49.473493  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:57:49.473504  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:57:49.473559  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:57:49.473598  444203 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:57:49.473610  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:57:49.473644  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:57:49.473680  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:57:49.473711  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:57:49.473759  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:49.473803  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.473819  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.473830  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.474446  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:57:49.501229  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:57:49.522107  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:57:49.550223  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:57:49.582434  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 20:57:49.603191  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:57:49.623340  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:57:49.644021  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:57:49.665040  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:57:49.686900  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:57:49.705561  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:57:49.723829  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:57:49.737136  444203 ssh_runner.go:195] Run: openssl version
	I1212 20:57:49.743571  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.751112  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:57:49.759507  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.763360  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.763427  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.804630  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:57:49.811926  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.819270  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:57:49.826837  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.830838  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.830912  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.872515  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:57:49.880250  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.887711  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:57:49.895442  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.899072  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.899140  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.940560  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:57:49.948269  444203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:57:49.952111  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:57:49.994329  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:57:50.049087  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:57:50.098831  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:57:50.155411  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:57:50.252310  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
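The openssl x509 -checkend 86400 calls above exit non-zero when a certificate will expire within the next 86400 seconds (24 hours), which is presumably how the restart path decides whether the existing control-plane certificates can be reused as-is. The same check can be reproduced by hand on the node; illustrative:

    sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo 'valid for at least another 24h' || echo 'expires within 24h'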
	I1212 20:57:50.338780  444203 kubeadm.go:401] StartCluster: {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:50.338978  444203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:57:50.339069  444203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:57:50.398773  444203 cri.go:89] found id: "8df671b2f67c1fea6933eed59bb0ed038b61ceb87afc6b29bfda67eb56bf94c5"
	I1212 20:57:50.398830  444203 cri.go:89] found id: "153af1b54c51e5f4602a99ee68deeb035520f031d6275c686a4d837adf8c7a9b"
	I1212 20:57:50.398851  444203 cri.go:89] found id: "afc1929ca6e740de8c3a64acc626b0e59ca06f13bd451285650a7214808d9608"
	I1212 20:57:50.398870  444203 cri.go:89] found id: "3df9e833b1b81ce05c8ed6dff7db997b5fe66bf67be14061cdbe13efd2dd87cf"
	I1212 20:57:50.398889  444203 cri.go:89] found id: "d1a55d9c86371ac0863607a8786cbe02fed629a5326460325861f8f7188e31b3"
	I1212 20:57:50.398924  444203 cri.go:89] found id: ""
	I1212 20:57:50.399008  444203 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 20:57:50.427490  444203 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:57:50Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:57:50.427620  444203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:57:50.436409  444203 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:57:50.436480  444203 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:57:50.436565  444203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:57:50.450254  444203 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:57:50.450723  444203 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-008703" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:50.450865  444203 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "ha-008703" cluster setting kubeconfig missing "ha-008703" context setting]
	I1212 20:57:50.451184  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.451770  444203 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:57:50.452602  444203 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:57:50.452649  444203 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:57:50.452671  444203 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:57:50.452699  444203 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:57:50.452724  444203 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:57:50.452669  444203 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 20:57:50.453064  444203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:57:50.471433  444203 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 20:57:50.471502  444203 kubeadm.go:602] duration metric: took 34.98508ms to restartPrimaryControlPlane
	I1212 20:57:50.471528  444203 kubeadm.go:403] duration metric: took 132.757161ms to StartCluster
	I1212 20:57:50.471560  444203 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.471649  444203 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:50.472264  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.472602  444203 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:57:50.472664  444203 start.go:242] waiting for startup goroutines ...
	I1212 20:57:50.472701  444203 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:57:50.473166  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:50.478696  444203 out.go:179] * Enabled addons: 
	I1212 20:57:50.481795  444203 addons.go:530] duration metric: took 9.096965ms for enable addons: enabled=[]
	I1212 20:57:50.481888  444203 start.go:247] waiting for cluster config update ...
	I1212 20:57:50.481913  444203 start.go:256] writing updated cluster config ...
	I1212 20:57:50.485267  444203 out.go:203] 
	I1212 20:57:50.488653  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:50.488812  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:50.492075  444203 out.go:179] * Starting "ha-008703-m02" control-plane node in "ha-008703" cluster
	I1212 20:57:50.494987  444203 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:57:50.498206  444203 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:57:50.501052  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:50.501113  444203 cache.go:65] Caching tarball of preloaded images
	I1212 20:57:50.501125  444203 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:57:50.501268  444203 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:57:50.501295  444203 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:57:50.501440  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:50.539828  444203 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:57:50.539849  444203 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:57:50.539866  444203 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:57:50.539902  444203 start.go:360] acquireMachinesLock for ha-008703-m02: {Name:mk9bbd559a38ee71084b431688c18ccf671707a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:57:50.539970  444203 start.go:364] duration metric: took 48.32µs to acquireMachinesLock for "ha-008703-m02"
	I1212 20:57:50.539997  444203 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:57:50.540008  444203 fix.go:54] fixHost starting: m02
	I1212 20:57:50.540289  444203 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 20:57:50.570630  444203 fix.go:112] recreateIfNeeded on ha-008703-m02: state=Stopped err=<nil>
	W1212 20:57:50.570662  444203 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:57:50.573920  444203 out.go:252] * Restarting existing docker container for "ha-008703-m02" ...
	I1212 20:57:50.574010  444203 cli_runner.go:164] Run: docker start ha-008703-m02
	I1212 20:57:51.021435  444203 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 20:57:51.051445  444203 kic.go:430] container "ha-008703-m02" state is running.
	I1212 20:57:51.051835  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:51.081868  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:51.082129  444203 machine.go:94] provisionDockerMachine start ...
	I1212 20:57:51.082189  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:51.114065  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:51.114398  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:51.114407  444203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:57:51.115163  444203 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 20:57:54.335915  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 20:57:54.335981  444203 ubuntu.go:182] provisioning hostname "ha-008703-m02"
	I1212 20:57:54.336094  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:54.365312  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:54.365660  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:54.365676  444203 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m02 && echo "ha-008703-m02" | sudo tee /etc/hostname
	I1212 20:57:54.750173  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 20:57:54.750344  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:54.784610  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:54.784933  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:54.784950  444203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:57:55.052390  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:57:55.052421  444203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:57:55.052440  444203 ubuntu.go:190] setting up certificates
	I1212 20:57:55.052459  444203 provision.go:84] configureAuth start
	I1212 20:57:55.052553  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:55.106212  444203 provision.go:143] copyHostCerts
	I1212 20:57:55.106261  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:55.106295  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:57:55.106307  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:55.106385  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:57:55.106475  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:55.106498  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:57:55.106503  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:55.106533  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:57:55.106577  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:55.106598  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:57:55.106605  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:55.106631  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:57:55.106681  444203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m02 san=[127.0.0.1 192.168.49.3 ha-008703-m02 localhost minikube]
	I1212 20:57:55.315977  444203 provision.go:177] copyRemoteCerts
	I1212 20:57:55.316047  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:57:55.316093  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:55.334254  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:55.478383  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:57:55.478451  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 20:57:55.517393  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:57:55.517463  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:57:55.542182  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:57:55.542251  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:57:55.571975  444203 provision.go:87] duration metric: took 519.496148ms to configureAuth
	I1212 20:57:55.572013  444203 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:57:55.572281  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:55.572439  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:55.601112  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:55.601422  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:55.601436  444203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:57:56.060871  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:57:56.060947  444203 machine.go:97] duration metric: took 4.978806446s to provisionDockerMachine
	I1212 20:57:56.060977  444203 start.go:293] postStartSetup for "ha-008703-m02" (driver="docker")
	I1212 20:57:56.061019  444203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:57:56.061131  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:57:56.061204  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.079622  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.188393  444203 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:57:56.191735  444203 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:57:56.191761  444203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:57:56.191773  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:57:56.191830  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:57:56.191915  444203 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:57:56.191925  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:57:56.192023  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:57:56.199559  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:56.216617  444203 start.go:296] duration metric: took 155.610404ms for postStartSetup
	I1212 20:57:56.216698  444203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:57:56.216740  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.233309  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.337931  444203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:57:56.342965  444203 fix.go:56] duration metric: took 5.802950492s for fixHost
	I1212 20:57:56.342991  444203 start.go:83] releasing machines lock for "ha-008703-m02", held for 5.803007207s
	I1212 20:57:56.343061  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:56.364818  444203 out.go:179] * Found network options:
	I1212 20:57:56.367652  444203 out.go:179]   - NO_PROXY=192.168.49.2
	W1212 20:57:56.370401  444203 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 20:57:56.370443  444203 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 20:57:56.370511  444203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:57:56.370552  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.370593  444203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:57:56.370646  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.391626  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.398057  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.575929  444203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:57:56.710881  444203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:57:56.710966  444203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:57:56.722145  444203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:57:56.722217  444203 start.go:496] detecting cgroup driver to use...
	I1212 20:57:56.722266  444203 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:57:56.722342  444203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:57:56.742981  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:57:56.765595  444203 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:57:56.765706  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:57:56.793166  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:57:56.814044  444203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:57:57.024630  444203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:57:57.240003  444203 docker.go:234] disabling docker service ...
	I1212 20:57:57.240088  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:57:57.260709  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:57:57.276845  444203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:57:57.490011  444203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:57:57.701011  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:57:57.718231  444203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:57:57.734672  444203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:57:57.734758  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.752791  444203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:57:57.752868  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.767185  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.783487  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.798836  444203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:57:57.808080  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.821261  444203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.835565  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.848412  444203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:57:57.861550  444203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:57:57.870875  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:58.097322  444203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:59:28.418240  444203 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320839757s)
	I1212 20:59:28.418266  444203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:59:28.418318  444203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:59:28.421907  444203 start.go:564] Will wait 60s for crictl version
	I1212 20:59:28.421970  444203 ssh_runner.go:195] Run: which crictl
	I1212 20:59:28.425474  444203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:59:28.451137  444203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:59:28.451224  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:59:28.487374  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:59:28.523846  444203 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:59:28.527097  444203 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 20:59:28.530093  444203 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:59:28.546578  444203 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:59:28.550700  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:59:28.561522  444203 mustload.go:66] Loading cluster: ha-008703
	I1212 20:59:28.561768  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:59:28.562034  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:59:28.579699  444203 host.go:66] Checking if "ha-008703" exists ...
	I1212 20:59:28.579981  444203 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.3
	I1212 20:59:28.579989  444203 certs.go:195] generating shared ca certs ...
	I1212 20:59:28.580003  444203 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:59:28.580127  444203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:59:28.580165  444203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:59:28.580173  444203 certs.go:257] generating profile certs ...
	I1212 20:59:28.580247  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 20:59:28.580315  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.b6a91b51
	I1212 20:59:28.580355  444203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 20:59:28.580363  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:59:28.580407  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:59:28.580418  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:59:28.580430  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:59:28.580441  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:59:28.580452  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:59:28.580465  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:59:28.580475  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:59:28.580526  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:59:28.580557  444203 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:59:28.580565  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:59:28.580591  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:59:28.580614  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:59:28.580640  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:59:28.580684  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:59:28.580713  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:59:28.580727  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:59:28.580738  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:28.580791  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:59:28.597816  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:59:28.696708  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 20:59:28.700659  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 20:59:28.709283  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 20:59:28.713481  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 20:59:28.721707  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 20:59:28.725369  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 20:59:28.733654  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 20:59:28.737443  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 20:59:28.745834  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 20:59:28.749617  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 20:59:28.758164  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 20:59:28.761831  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 20:59:28.770067  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:59:28.787610  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:59:28.806372  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:59:28.824957  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:59:28.844568  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 20:59:28.863238  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:59:28.881382  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:59:28.900337  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:59:28.919403  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:59:28.938551  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:59:28.958859  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:59:28.977347  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 20:59:28.998600  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 20:59:29.014406  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 20:59:29.027571  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 20:59:29.040968  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 20:59:29.054581  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 20:59:29.067754  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 20:59:29.080811  444203 ssh_runner.go:195] Run: openssl version
	I1212 20:59:29.087180  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.095114  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:59:29.102755  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.106745  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.106853  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.152715  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:59:29.160933  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.168533  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:59:29.177095  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.181103  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.181174  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.222399  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:59:29.233819  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.241844  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:59:29.249788  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.254119  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.254190  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.295461  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:59:29.303146  444203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:59:29.307067  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:59:29.350787  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:59:29.392520  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:59:29.433715  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:59:29.474688  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:59:29.516288  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:59:29.557959  444203 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1212 20:59:29.558056  444203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:59:29.558087  444203 kube-vip.go:115] generating kube-vip config ...
	I1212 20:59:29.558148  444203 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 20:59:29.572235  444203 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:59:29.572334  444203 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
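	The generated manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below) and advertises the control-plane VIP 192.168.49.254 on eth0 with host networking. A minimal sketch that parses the written manifest and reads back the advertised address, assuming gopkg.in/yaml.v3 (an illustration only, not part of minikube):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // Minimal struct covering only the fields inspected here.
    type pod struct {
        Spec struct {
            HostNetwork bool `yaml:"hostNetwork"`
            Containers  []struct {
                Image string `yaml:"image"`
                Env   []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        // Path used in the log below when the manifest is copied to the node.
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            panic(err)
        }
        var p pod
        if err := yaml.Unmarshal(data, &p); err != nil {
            panic(err)
        }
        for _, c := range p.Spec.Containers {
            for _, e := range c.Env {
                if e.Name == "address" {
                    fmt.Printf("%s advertises VIP %s (hostNetwork=%v)\n", c.Image, e.Value, p.Spec.HostNetwork)
                }
            }
        }
    }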
	I1212 20:59:29.572441  444203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:59:29.580681  444203 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:59:29.580751  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 20:59:29.588356  444203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 20:59:29.602149  444203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:59:29.615313  444203 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 20:59:29.629715  444203 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 20:59:29.633469  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:59:29.643261  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:59:29.776061  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:59:29.790278  444203 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:59:29.790703  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:59:29.794669  444203 out.go:179] * Verifying Kubernetes components...
	I1212 20:59:29.797306  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:59:29.936519  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:59:29.950752  444203 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 20:59:29.950831  444203 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 20:59:29.952083  444203 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m02" to be "Ready" ...
	W1212 20:59:31.953427  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:33.953536  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:36.453558  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:38.952703  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:41.452655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:43.452691  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:45.952750  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:48.452655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:50.452746  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:00.954217  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:00:10.954802  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:00:12.960855  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:00:12.961321  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:58400->192.168.49.2:8443: read: connection reset by peer
	W1212 21:00:15.453549  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:17.952657  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:20.453573  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:22.952730  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:24.953541  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:27.452882  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:29.952571  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:32.452751  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:34.953509  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:37.452853  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:39.953131  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:41.953378  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:44.452656  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:46.952721  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:48.952858  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:51.452609  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:53.452824  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:55.952717  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:57.953626  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:08.953781  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:01:18.955065  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:01:20.812078  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:01:21.453435  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:23.952633  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:25.953670  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:28.453604  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:30.952713  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:32.953585  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:34.953661  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:37.452751  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:39.952713  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:42.452830  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:44.952768  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:47.452920  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:49.952685  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:51.953605  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:54.452622  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:56.453648  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:58.952804  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:01.452588  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:03.452926  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:05.952702  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:07.952958  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:09.953705  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:12.452917  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:14.952877  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:17.452818  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:19.952741  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:22.452860  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:24.952709  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:27.452855  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:29.952655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:31.952748  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:33.952822  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:36.452695  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:38.452788  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:40.452868  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:42.952779  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:44.952905  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:47.453071  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:49.453482  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:59.953684  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:03:09.954306  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:03:12.758807  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:03:12.759288  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:51776->192.168.49.2:8443: read: connection reset by peer
	W1212 21:03:14.952704  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:17.452843  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:19.952792  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:22.452752  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:24.952700  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:27.452954  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:29.952844  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:32.452952  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:34.953666  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:37.452747  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:39.952664  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:41.952726  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:43.952797  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:45.952870  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:48.452683  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:50.453535  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:52.952772  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:55.452788  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:57.452867  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:59.952895  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:02.452860  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:04.952915  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:07.452753  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:09.453637  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:11.952833  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:14.452718  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:16.952636  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:18.953630  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:21.452687  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:23.952770  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:25.952829  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:28.452772  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:30.453677  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:32.952813  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:35.452679  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:37.453048  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:39.453453  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:41.952806  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:44.452710  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:46.952744  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:48.952846  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:51.452675  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:53.452999  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:55.952801  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:58.452747  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:00.952662  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:02.952760  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:05.452732  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:07.452887  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:09.952790  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:11.953431  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:14.452702  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:16.952708  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:19.452740  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:21.453565  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:23.953569  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:25.953740  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:28.452736  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1212 21:05:29.952402  444203 node_ready.go:38] duration metric: took 6m0.000280641s for node "ha-008703-m02" to be "Ready" ...
	I1212 21:05:29.955795  444203 out.go:203] 
	W1212 21:05:29.958921  444203 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:05:29.958945  444203 out.go:285] * 
	* 
	W1212 21:05:29.961096  444203 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:05:29.963919  444203 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-arm64 -p ha-008703 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 node list --alsologtostderr -v 5
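The stderr capture above shows the shape of the failure: minikube polls the Kubernetes node object for "ha-008703-m02" against the apiserver at https://192.168.49.2:8443 roughly every 2.5 seconds, every attempt is refused, and after the 6m0s wait budget the start exits with GUEST_START (exit status 80). As a minimal sketch of that poll-until-deadline pattern, assuming only the standard library (the endpoint URL and the 6-minute budget are taken from the log; waitNodeReady is an illustrative name, not minikube's actual code, which uses client-go):

package main

import (
	"context"
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitNodeReady polls the given node URL until a request succeeds or the
// context deadline expires, mirroring the retry loop in the log above.
// Illustrative sketch only, not minikube's implementation.
func waitNodeReady(ctx context.Context, url string) error {
	client := &http.Client{
		// The test cluster uses a self-signed CA; verification is skipped
		// here purely to keep the sketch self-contained.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   10 * time.Second,
	}
	ticker := time.NewTicker(2500 * time.Millisecond)
	defer ticker.Stop()
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				// Node object is reachable; a real caller would now
				// inspect the "Ready" condition in the response body.
				return nil
			}
			err = fmt.Errorf("unexpected status %s", resp.Status)
		}
		fmt.Printf("error getting node (will retry): %v\n", err)
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	// 6m0s matches the wait budget reported in the log above.
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"); err != nil {
		fmt.Println("exiting:", err)
	}
}

In this run every iteration of that loop fails with "connection refused" (briefly "TLS handshake timeout"), so the deadline fires and the start aborts with "WaitNodeCondition: context deadline exceeded".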
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-008703
helpers_test.go:244: (dbg) docker inspect ha-008703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	        "Created": "2025-12-12T20:51:45.347520369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 444329,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:57:42.720187316Z",
	            "FinishedAt": "2025-12-12T20:57:42.030104403Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hosts",
	        "LogPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a-json.log",
	        "Name": "/ha-008703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-008703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-008703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	                "LowerDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-008703",
	                "Source": "/var/lib/docker/volumes/ha-008703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-008703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-008703",
	                "name.minikube.sigs.k8s.io": "ha-008703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "493c43ee23fd8e7f78466c871a302edc137070db11f7e6b5d032ce802f3f0262",
	            "SandboxKey": "/var/run/docker/netns/493c43ee23fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-008703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:72:e3:2e:78:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff7ed303f4da65b7f5bbe1449be583e134fa05bb2920a77ae31b6f437cc1bd4b",
	                    "EndpointID": "43672cbb724d118edeacd3584cc29f7251f2a336562cd7d37b8d180ba19da903",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-008703",
	                        "2ec03df03a30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703: exit status 2 (332.747029ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
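The docker inspect output above shows how the kic container publishes each exposed port (22, 2376, 5000, 8443, 32443) on 127.0.0.1 with an ephemeral host port; later in the Last Start log the SSH endpoint is resolved the same way (22/tcp maps to 33192). As a rough sketch of reading that mapping from the docker inspect JSON, assuming only the standard library (hostPortFor and the anonymous structs are illustrative names, not minikube code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// hostPortFor shells out to `docker inspect` and returns the host port bound
// to the given container port (e.g. "22/tcp"), following the
// NetworkSettings.Ports structure shown in the inspect output above.
func hostPortFor(container, containerPort string) (string, error) {
	out, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var info []struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}
	if err := json.Unmarshal(out, &info); err != nil {
		return "", err
	}
	if len(info) == 0 {
		return "", fmt.Errorf("container %q not found", container)
	}
	bindings := info[0].NetworkSettings.Ports[containerPort]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no host binding for %s", containerPort)
	}
	return bindings[0].HostPort, nil
}

func main() {
	port, err := hostPortFor("ha-008703", "22/tcp")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	// In this run the SSH endpoint resolves to 127.0.0.1:33192.
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}

minikube itself does the equivalent with a Go template, as seen further down in the Last Start log: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703.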
helpers_test.go:253: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 logs -n 25
helpers_test.go:261: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-008703 cp ha-008703-m03:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703-m03_ha-008703-m02.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m02 sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m02.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m03:/home/docker/cp-test.txt ha-008703-m04:/home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp testdata/cp-test.txt ha-008703-m04:/home/docker/cp-test.txt                                                            │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703-m04.txt │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703:/home/docker/cp-test_ha-008703-m04_ha-008703.txt                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703.txt                                                │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m02 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m03:/home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node start m02 --alsologtostderr -v 5                                                                                     │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:57 UTC │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │ 12 Dec 25 20:57 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5                                                                                  │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:57:42
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:57:42.443959  444203 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:57:42.444139  444203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:57:42.444170  444203 out.go:374] Setting ErrFile to fd 2...
	I1212 20:57:42.444190  444203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:57:42.444488  444203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:57:42.444894  444203 out.go:368] Setting JSON to false
	I1212 20:57:42.445764  444203 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":13215,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:57:42.445866  444203 start.go:143] virtualization:  
	I1212 20:57:42.448973  444203 out.go:179] * [ha-008703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:57:42.452845  444203 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:57:42.452922  444203 notify.go:221] Checking for updates...
	I1212 20:57:42.458690  444203 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:57:42.461546  444203 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:42.464549  444203 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:57:42.467438  444203 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:57:42.470311  444203 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:57:42.473663  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:42.473791  444203 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:57:42.502175  444203 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:57:42.502305  444203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:57:42.567154  444203 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 20:57:42.556873235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:57:42.567281  444203 docker.go:319] overlay module found
	I1212 20:57:42.570683  444203 out.go:179] * Using the docker driver based on existing profile
	I1212 20:57:42.573609  444203 start.go:309] selected driver: docker
	I1212 20:57:42.573638  444203 start.go:927] validating driver "docker" against &{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:42.573801  444203 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:57:42.573920  444203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:57:42.631794  444203 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 20:57:42.621825898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:57:42.632218  444203 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:57:42.632254  444203 cni.go:84] Creating CNI manager for ""
	I1212 20:57:42.632316  444203 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 20:57:42.632425  444203 start.go:353] cluster config:
	{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:42.635654  444203 out.go:179] * Starting "ha-008703" primary control-plane node in "ha-008703" cluster
	I1212 20:57:42.638374  444203 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:57:42.641273  444203 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:57:42.644097  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:42.644143  444203 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 20:57:42.644156  444203 cache.go:65] Caching tarball of preloaded images
	I1212 20:57:42.644194  444203 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:57:42.644262  444203 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:57:42.644272  444203 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:57:42.644440  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:42.664350  444203 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:57:42.664409  444203 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:57:42.664432  444203 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:57:42.664465  444203 start.go:360] acquireMachinesLock for ha-008703: {Name:mk6e7d74f274e3ed345384f8b747c056bd141bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:57:42.664534  444203 start.go:364] duration metric: took 45.473µs to acquireMachinesLock for "ha-008703"
	I1212 20:57:42.664558  444203 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:57:42.664567  444203 fix.go:54] fixHost starting: 
	I1212 20:57:42.664830  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:57:42.682444  444203 fix.go:112] recreateIfNeeded on ha-008703: state=Stopped err=<nil>
	W1212 20:57:42.682482  444203 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:57:42.687702  444203 out.go:252] * Restarting existing docker container for "ha-008703" ...
	I1212 20:57:42.687806  444203 cli_runner.go:164] Run: docker start ha-008703
	I1212 20:57:42.929392  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:57:42.950691  444203 kic.go:430] container "ha-008703" state is running.
	I1212 20:57:42.951124  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:42.975911  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:42.976159  444203 machine.go:94] provisionDockerMachine start ...
	I1212 20:57:42.976233  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:43.000950  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:43.001319  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:43.001348  444203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:57:43.002175  444203 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 20:57:46.155930  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 20:57:46.155957  444203 ubuntu.go:182] provisioning hostname "ha-008703"
	I1212 20:57:46.156028  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.174281  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.174613  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.174631  444203 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703 && echo "ha-008703" | sudo tee /etc/hostname
	I1212 20:57:46.334176  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 20:57:46.334256  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.353092  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.353419  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.353444  444203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:57:46.504764  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:57:46.504855  444203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:57:46.504906  444203 ubuntu.go:190] setting up certificates
	I1212 20:57:46.504931  444203 provision.go:84] configureAuth start
	I1212 20:57:46.505018  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:46.522153  444203 provision.go:143] copyHostCerts
	I1212 20:57:46.522196  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:46.522237  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:57:46.522245  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:46.522321  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:57:46.522414  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:46.522431  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:57:46.522435  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:46.522464  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:57:46.522512  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:46.522532  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:57:46.522536  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:46.522563  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:57:46.522618  444203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703 san=[127.0.0.1 192.168.49.2 ha-008703 localhost minikube]
	I1212 20:57:46.651816  444203 provision.go:177] copyRemoteCerts
	I1212 20:57:46.651886  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:57:46.651968  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.671188  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:46.776309  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:57:46.776386  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:57:46.794675  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:57:46.794741  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1212 20:57:46.813024  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:57:46.813085  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:57:46.830950  444203 provision.go:87] duration metric: took 325.983006ms to configureAuth
	I1212 20:57:46.830977  444203 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:57:46.831235  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:46.831340  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.848478  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.848794  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.848812  444203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:57:47.235920  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:57:47.235994  444203 machine.go:97] duration metric: took 4.259816851s to provisionDockerMachine
	I1212 20:57:47.236020  444203 start.go:293] postStartSetup for "ha-008703" (driver="docker")
	I1212 20:57:47.236048  444203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:57:47.236157  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:57:47.236233  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.261608  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.368446  444203 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:57:47.372121  444203 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:57:47.372152  444203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:57:47.372170  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:57:47.372227  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:57:47.372309  444203 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:57:47.372320  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:57:47.372447  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:57:47.380725  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:47.398959  444203 start.go:296] duration metric: took 162.907605ms for postStartSetup
	I1212 20:57:47.399064  444203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:57:47.399134  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.420756  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.525530  444203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:57:47.530321  444203 fix.go:56] duration metric: took 4.865746757s for fixHost
	I1212 20:57:47.530348  444203 start.go:83] releasing machines lock for "ha-008703", held for 4.865800567s
	I1212 20:57:47.530419  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:47.548629  444203 ssh_runner.go:195] Run: cat /version.json
	I1212 20:57:47.548688  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.548950  444203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:57:47.549003  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.573240  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.580519  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.676043  444203 ssh_runner.go:195] Run: systemctl --version
	I1212 20:57:47.771712  444203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:57:47.808898  444203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:57:47.813508  444203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:57:47.813590  444203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:57:47.821723  444203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:57:47.821748  444203 start.go:496] detecting cgroup driver to use...
	I1212 20:57:47.821827  444203 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:57:47.821894  444203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:57:47.837549  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:57:47.851337  444203 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:57:47.851435  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:57:47.867827  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:57:47.881469  444203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:57:47.990806  444203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:57:48.117810  444203 docker.go:234] disabling docker service ...
	I1212 20:57:48.117891  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:57:48.133641  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:57:48.146962  444203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:57:48.263631  444203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:57:48.385870  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:57:48.400502  444203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:57:48.415928  444203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:57:48.415999  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.425436  444203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:57:48.425516  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.434622  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.443654  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.452998  444203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:57:48.462000  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.471517  444203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.480019  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.488892  444203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:57:48.501776  444203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:57:48.509429  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:48.636874  444203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:57:48.831677  444203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:57:48.831797  444203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:57:48.835749  444203 start.go:564] Will wait 60s for crictl version
	I1212 20:57:48.835860  444203 ssh_runner.go:195] Run: which crictl
	I1212 20:57:48.839496  444203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:57:48.865845  444203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:57:48.865936  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:57:48.896176  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:57:48.926063  444203 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:57:48.928824  444203 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:57:48.945819  444203 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:57:48.949721  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:57:48.960274  444203 kubeadm.go:884] updating cluster {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:57:48.960470  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:48.960528  444203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:57:48.995177  444203 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:57:48.995203  444203 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:57:48.995261  444203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:57:49.022349  444203 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:57:49.022375  444203 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:57:49.022384  444203 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 20:57:49.022522  444203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:57:49.022613  444203 ssh_runner.go:195] Run: crio config
	I1212 20:57:49.094808  444203 cni.go:84] Creating CNI manager for ""
	I1212 20:57:49.094833  444203 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 20:57:49.094884  444203 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:57:49.094931  444203 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-008703 NodeName:ha-008703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:57:49.095072  444203 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-008703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
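The block above is the full kubeadm/kubelet/kube-proxy configuration that minikube renders for the restart; the scp step further down in this log uploads it to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal way to sanity-check what actually landed there, assuming the ha-008703 profile is running and that this kubeadm release offers the "config validate" subcommand (an assumption, not shown in the log):

    # Read back the rendered config from the node (path taken from the scp step below).
    minikube ssh -p ha-008703 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new

    # Optional: ask kubeadm itself to validate it (subcommand availability assumed).
    minikube ssh -p ha-008703 -- sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
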
	I1212 20:57:49.095097  444203 kube-vip.go:115] generating kube-vip config ...
	I1212 20:57:49.095151  444203 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 20:57:49.107313  444203 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
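Because the lsmod check comes back empty, kube-vip's IPVS-based control-plane load-balancer is skipped and only the ARP-advertised VIP is used. Since kic containers share the Docker host's kernel, the modules could be loaded on the host beforehand if IPVS balancing were wanted; a minimal sketch, assuming the host kernel ships the standard ip_vs modules:

    # Run on the Docker host; the container shares this kernel, so the
    # "lsmod | grep ip_vs" probe above would then succeed.
    sudo modprobe ip_vs
    sudo modprobe ip_vs_rr
    lsmod | grep ip_vs
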
	I1212 20:57:49.107428  444203 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
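The manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below), so the kubelet runs kube-vip as a static pod that advertises the HA API-server VIP 192.168.49.254 on eth0 via ARP leader election. A minimal spot-check once the node is back up, assuming the ha-008703 profile:

    # Confirm the static pod manifest is where the kubelet expects it.
    minikube ssh -p ha-008703 -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml

    # After kube-vip wins leader election the VIP should appear on eth0
    # (look for 192.168.49.254 in the output).
    minikube ssh -p ha-008703 -- ip addr show eth0
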
	I1212 20:57:49.107499  444203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:57:49.115345  444203 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:57:49.115415  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1212 20:57:49.123505  444203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1212 20:57:49.136430  444203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:57:49.149479  444203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1212 20:57:49.163560  444203 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 20:57:49.176571  444203 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 20:57:49.180272  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:57:49.190686  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:49.306812  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:57:49.322473  444203 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.2
	I1212 20:57:49.322495  444203 certs.go:195] generating shared ca certs ...
	I1212 20:57:49.322510  444203 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.322646  444203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:57:49.322706  444203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:57:49.322721  444203 certs.go:257] generating profile certs ...
	I1212 20:57:49.322803  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 20:57:49.322831  444203 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904
	I1212 20:57:49.322854  444203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1212 20:57:49.472738  444203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 ...
	I1212 20:57:49.472774  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904: {Name:mk2a5379bc5668a2307c7e3ec981ab026dda45c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.472981  444203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904 ...
	I1212 20:57:49.473001  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904: {Name:mk9431140de21966b13bcbc9ba3792a6b7192788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.473093  444203 certs.go:382] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt
	I1212 20:57:49.473241  444203 certs.go:386] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key
	I1212 20:57:49.473382  444203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 20:57:49.473401  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:57:49.473419  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:57:49.473436  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:57:49.473449  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:57:49.473464  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:57:49.473478  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:57:49.473493  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:57:49.473504  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:57:49.473559  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:57:49.473598  444203 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:57:49.473610  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:57:49.473644  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:57:49.473680  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:57:49.473711  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:57:49.473759  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:49.473803  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.473819  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.473830  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.474446  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:57:49.501229  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:57:49.522107  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:57:49.550223  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:57:49.582434  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 20:57:49.603191  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:57:49.623340  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:57:49.644021  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:57:49.665040  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:57:49.686900  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:57:49.705561  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:57:49.723829  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:57:49.737136  444203 ssh_runner.go:195] Run: openssl version
	I1212 20:57:49.743571  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.751112  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:57:49.759507  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.763360  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.763427  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.804630  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:57:49.811926  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.819270  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:57:49.826837  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.830838  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.830912  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.872515  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:57:49.880250  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.887711  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:57:49.895442  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.899072  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.899140  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.940560  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:57:49.948269  444203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:57:49.952111  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:57:49.994329  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:57:50.049087  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:57:50.098831  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:57:50.155411  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:57:50.252310  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:57:50.338780  444203 kubeadm.go:401] StartCluster: {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:50.338978  444203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:57:50.339069  444203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:57:50.398773  444203 cri.go:89] found id: "8df671b2f67c1fea6933eed59bb0ed038b61ceb87afc6b29bfda67eb56bf94c5"
	I1212 20:57:50.398830  444203 cri.go:89] found id: "153af1b54c51e5f4602a99ee68deeb035520f031d6275c686a4d837adf8c7a9b"
	I1212 20:57:50.398851  444203 cri.go:89] found id: "afc1929ca6e740de8c3a64acc626b0e59ca06f13bd451285650a7214808d9608"
	I1212 20:57:50.398870  444203 cri.go:89] found id: "3df9e833b1b81ce05c8ed6dff7db997b5fe66bf67be14061cdbe13efd2dd87cf"
	I1212 20:57:50.398889  444203 cri.go:89] found id: "d1a55d9c86371ac0863607a8786cbe02fed629a5326460325861f8f7188e31b3"
	I1212 20:57:50.398924  444203 cri.go:89] found id: ""
	I1212 20:57:50.399008  444203 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 20:57:50.427490  444203 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:57:50Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:57:50.427620  444203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:57:50.436409  444203 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:57:50.436480  444203 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:57:50.436565  444203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:57:50.450254  444203 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:57:50.450723  444203 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-008703" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:50.450865  444203 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "ha-008703" cluster setting kubeconfig missing "ha-008703" context setting]
	I1212 20:57:50.451184  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.451770  444203 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:57:50.452602  444203 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:57:50.452649  444203 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:57:50.452671  444203 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:57:50.452699  444203 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:57:50.452724  444203 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:57:50.452669  444203 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 20:57:50.453064  444203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:57:50.471433  444203 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 20:57:50.471502  444203 kubeadm.go:602] duration metric: took 34.98508ms to restartPrimaryControlPlane
	I1212 20:57:50.471528  444203 kubeadm.go:403] duration metric: took 132.757161ms to StartCluster
	I1212 20:57:50.471560  444203 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.471649  444203 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:50.472264  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.472602  444203 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:57:50.472664  444203 start.go:242] waiting for startup goroutines ...
	I1212 20:57:50.472701  444203 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:57:50.473166  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:50.478696  444203 out.go:179] * Enabled addons: 
	I1212 20:57:50.481795  444203 addons.go:530] duration metric: took 9.096965ms for enable addons: enabled=[]
	I1212 20:57:50.481888  444203 start.go:247] waiting for cluster config update ...
	I1212 20:57:50.481913  444203 start.go:256] writing updated cluster config ...
	I1212 20:57:50.485267  444203 out.go:203] 
	I1212 20:57:50.488653  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:50.488812  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:50.492075  444203 out.go:179] * Starting "ha-008703-m02" control-plane node in "ha-008703" cluster
	I1212 20:57:50.494987  444203 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:57:50.498206  444203 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:57:50.501052  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:50.501113  444203 cache.go:65] Caching tarball of preloaded images
	I1212 20:57:50.501125  444203 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:57:50.501268  444203 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:57:50.501295  444203 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:57:50.501440  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:50.539828  444203 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:57:50.539849  444203 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:57:50.539866  444203 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:57:50.539902  444203 start.go:360] acquireMachinesLock for ha-008703-m02: {Name:mk9bbd559a38ee71084b431688c18ccf671707a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:57:50.539970  444203 start.go:364] duration metric: took 48.32µs to acquireMachinesLock for "ha-008703-m02"
	I1212 20:57:50.539997  444203 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:57:50.540008  444203 fix.go:54] fixHost starting: m02
	I1212 20:57:50.540289  444203 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 20:57:50.570630  444203 fix.go:112] recreateIfNeeded on ha-008703-m02: state=Stopped err=<nil>
	W1212 20:57:50.570662  444203 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:57:50.573920  444203 out.go:252] * Restarting existing docker container for "ha-008703-m02" ...
	I1212 20:57:50.574010  444203 cli_runner.go:164] Run: docker start ha-008703-m02
	I1212 20:57:51.021435  444203 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 20:57:51.051445  444203 kic.go:430] container "ha-008703-m02" state is running.
	I1212 20:57:51.051835  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:51.081868  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:51.082129  444203 machine.go:94] provisionDockerMachine start ...
	I1212 20:57:51.082189  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:51.114065  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:51.114398  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:51.114407  444203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:57:51.115163  444203 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 20:57:54.335915  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 20:57:54.335981  444203 ubuntu.go:182] provisioning hostname "ha-008703-m02"
	I1212 20:57:54.336094  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:54.365312  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:54.365660  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:54.365676  444203 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m02 && echo "ha-008703-m02" | sudo tee /etc/hostname
	I1212 20:57:54.750173  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 20:57:54.750344  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:54.784610  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:54.784933  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:54.784950  444203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:57:55.052390  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:57:55.052421  444203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:57:55.052440  444203 ubuntu.go:190] setting up certificates
	I1212 20:57:55.052459  444203 provision.go:84] configureAuth start
	I1212 20:57:55.052553  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:55.106212  444203 provision.go:143] copyHostCerts
	I1212 20:57:55.106261  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:55.106295  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:57:55.106307  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:55.106385  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:57:55.106475  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:55.106498  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:57:55.106503  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:55.106533  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:57:55.106577  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:55.106598  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:57:55.106605  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:55.106631  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:57:55.106681  444203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m02 san=[127.0.0.1 192.168.49.3 ha-008703-m02 localhost minikube]
	I1212 20:57:55.315977  444203 provision.go:177] copyRemoteCerts
	I1212 20:57:55.316047  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:57:55.316093  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:55.334254  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:55.478383  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:57:55.478451  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 20:57:55.517393  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:57:55.517463  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:57:55.542182  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:57:55.542251  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:57:55.571975  444203 provision.go:87] duration metric: took 519.496148ms to configureAuth
	I1212 20:57:55.572013  444203 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:57:55.572281  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:55.572439  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:55.601112  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:55.601422  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:55.601436  444203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:57:56.060871  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:57:56.060947  444203 machine.go:97] duration metric: took 4.978806446s to provisionDockerMachine
	I1212 20:57:56.060977  444203 start.go:293] postStartSetup for "ha-008703-m02" (driver="docker")
	I1212 20:57:56.061019  444203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:57:56.061131  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:57:56.061204  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.079622  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.188393  444203 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:57:56.191735  444203 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:57:56.191761  444203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:57:56.191773  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:57:56.191830  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:57:56.191915  444203 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:57:56.191925  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:57:56.192023  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:57:56.199559  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:56.216617  444203 start.go:296] duration metric: took 155.610404ms for postStartSetup
	I1212 20:57:56.216698  444203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:57:56.216740  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.233309  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.337931  444203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:57:56.342965  444203 fix.go:56] duration metric: took 5.802950492s for fixHost
	I1212 20:57:56.342991  444203 start.go:83] releasing machines lock for "ha-008703-m02", held for 5.803007207s
	I1212 20:57:56.343061  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:56.364818  444203 out.go:179] * Found network options:
	I1212 20:57:56.367652  444203 out.go:179]   - NO_PROXY=192.168.49.2
	W1212 20:57:56.370401  444203 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 20:57:56.370443  444203 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 20:57:56.370511  444203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:57:56.370552  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.370593  444203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:57:56.370646  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.391626  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.398057  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.575929  444203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:57:56.710881  444203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:57:56.710966  444203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:57:56.722145  444203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:57:56.722217  444203 start.go:496] detecting cgroup driver to use...
	I1212 20:57:56.722266  444203 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:57:56.722342  444203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:57:56.742981  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:57:56.765595  444203 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:57:56.765706  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:57:56.793166  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:57:56.814044  444203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:57:57.024630  444203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:57:57.240003  444203 docker.go:234] disabling docker service ...
	I1212 20:57:57.240088  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:57:57.260709  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:57:57.276845  444203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:57:57.490011  444203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:57:57.701011  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:57:57.718231  444203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:57:57.734672  444203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:57:57.734758  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.752791  444203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:57:57.752868  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.767185  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.783487  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.798836  444203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:57:57.808080  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.821261  444203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.835565  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.848412  444203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:57:57.861550  444203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:57:57.870875  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:58.097322  444203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:59:28.418240  444203 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320839757s)
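Note: the "sudo systemctl restart crio" above blocked for roughly 90 seconds, which matches systemd's default stop timeout (DefaultTimeoutStopSec=90s), suggesting the old crio process did not exit promptly and was only replaced once the stop timed out. A hypothetical way to confirm this on the node, outside this test run:

    $ systemctl show -p TimeoutStopUSec crio
    $ journalctl -u crio -n 50 --no-pager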
	I1212 20:59:28.418266  444203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:59:28.418318  444203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:59:28.421907  444203 start.go:564] Will wait 60s for crictl version
	I1212 20:59:28.421970  444203 ssh_runner.go:195] Run: which crictl
	I1212 20:59:28.425474  444203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:59:28.451137  444203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:59:28.451224  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:59:28.487374  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:59:28.523846  444203 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:59:28.527097  444203 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 20:59:28.530093  444203 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:59:28.546578  444203 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:59:28.550700  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:59:28.561522  444203 mustload.go:66] Loading cluster: ha-008703
	I1212 20:59:28.561768  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:59:28.562034  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:59:28.579699  444203 host.go:66] Checking if "ha-008703" exists ...
	I1212 20:59:28.579981  444203 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.3
	I1212 20:59:28.579989  444203 certs.go:195] generating shared ca certs ...
	I1212 20:59:28.580003  444203 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:59:28.580127  444203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:59:28.580165  444203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:59:28.580173  444203 certs.go:257] generating profile certs ...
	I1212 20:59:28.580247  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 20:59:28.580315  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.b6a91b51
	I1212 20:59:28.580355  444203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 20:59:28.580363  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:59:28.580407  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:59:28.580418  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:59:28.580430  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:59:28.580441  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:59:28.580452  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:59:28.580465  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:59:28.580475  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:59:28.580526  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:59:28.580557  444203 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:59:28.580565  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:59:28.580591  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:59:28.580614  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:59:28.580640  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:59:28.580684  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:59:28.580713  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:59:28.580727  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:59:28.580738  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:28.580791  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:59:28.597816  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:59:28.696708  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 20:59:28.700659  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 20:59:28.709283  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 20:59:28.713481  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 20:59:28.721707  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 20:59:28.725369  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 20:59:28.733654  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 20:59:28.737443  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 20:59:28.745834  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 20:59:28.749617  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 20:59:28.758164  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 20:59:28.761831  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 20:59:28.770067  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:59:28.787610  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:59:28.806372  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:59:28.824957  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:59:28.844568  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 20:59:28.863238  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:59:28.881382  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:59:28.900337  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:59:28.919403  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:59:28.938551  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:59:28.958859  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:59:28.977347  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 20:59:28.998600  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 20:59:29.014406  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 20:59:29.027571  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 20:59:29.040968  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 20:59:29.054581  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 20:59:29.067754  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 20:59:29.080811  444203 ssh_runner.go:195] Run: openssl version
	I1212 20:59:29.087180  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.095114  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:59:29.102755  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.106745  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.106853  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.152715  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:59:29.160933  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.168533  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:59:29.177095  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.181103  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.181174  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.222399  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:59:29.233819  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.241844  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:59:29.249788  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.254119  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.254190  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.295461  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:59:29.303146  444203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:59:29.307067  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:59:29.350787  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:59:29.392520  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:59:29.433715  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:59:29.474688  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:59:29.516288  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
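Note: each "openssl x509 ... -checkend 86400" call above exits 0 only if the certificate stays valid for at least another 86400 seconds (24 hours); the run continues because all control-plane certs pass. An illustrative manual equivalent on the node:

    $ sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt && echo "valid for >= 24h"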
	I1212 20:59:29.557959  444203 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1212 20:59:29.558056  444203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:59:29.558087  444203 kube-vip.go:115] generating kube-vip config ...
	I1212 20:59:29.558148  444203 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 20:59:29.572235  444203 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:59:29.572334  444203 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
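Note: the generated kube-vip manifest above skips IPVS-based control-plane load balancing because the "lsmod | grep ip_vs" probe earlier returned nothing, so kube-vip only advertises the 192.168.49.254 VIP via ARP (vip_arp: "true"). Whether the modules can be loaded at all depends on the host kernel; a hypothetical check outside this test run:

    $ lsmod | grep ip_vs || sudo modprobe ip_vs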
	I1212 20:59:29.572441  444203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:59:29.580681  444203 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:59:29.580751  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 20:59:29.588356  444203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 20:59:29.602149  444203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:59:29.615313  444203 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 20:59:29.629715  444203 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 20:59:29.633469  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:59:29.643261  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:59:29.776061  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:59:29.790278  444203 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:59:29.790703  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:59:29.794669  444203 out.go:179] * Verifying Kubernetes components...
	I1212 20:59:29.797306  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:59:29.936519  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:59:29.950752  444203 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 20:59:29.950831  444203 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 20:59:29.952083  444203 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m02" to be "Ready" ...
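Note: node_ready.go polls the node object and inspects its "Ready" condition; the retries below all fail because the apiserver endpoint 192.168.49.2:8443 is either refusing connections or timing out during the TLS handshake. A hypothetical manual equivalent of the readiness check:

    $ kubectl get node ha-008703-m02 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'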
	W1212 20:59:31.953427  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:33.953536  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:36.453558  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:38.952703  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:41.452655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:43.452691  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:45.952750  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:48.452655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:50.452746  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:00.954217  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:00:10.954802  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:00:12.960855  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:00:12.961321  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:58400->192.168.49.2:8443: read: connection reset by peer
	W1212 21:00:15.453549  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:17.952657  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:20.453573  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:22.952730  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:24.953541  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:27.452882  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:29.952571  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:32.452751  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:34.953509  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:37.452853  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:39.953131  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:41.953378  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:44.452656  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:46.952721  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:48.952858  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:51.452609  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:53.452824  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:55.952717  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:57.953626  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:08.953781  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:01:18.955065  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:01:20.812078  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:01:21.453435  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:23.952633  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:25.953670  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:28.453604  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:30.952713  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:32.953585  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:34.953661  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:37.452751  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:39.952713  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:42.452830  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:44.952768  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:47.452920  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:49.952685  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:51.953605  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:54.452622  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:56.453648  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:58.952804  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:01.452588  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:03.452926  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:05.952702  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:07.952958  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:09.953705  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:12.452917  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:14.952877  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:17.452818  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:19.952741  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:22.452860  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:24.952709  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:27.452855  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:29.952655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:31.952748  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:33.952822  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:36.452695  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:38.452788  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:40.452868  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:42.952779  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:44.952905  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:47.453071  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:49.453482  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:59.953684  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:03:09.954306  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:03:12.758807  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:03:12.759288  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:51776->192.168.49.2:8443: read: connection reset by peer
	W1212 21:03:14.952704  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:17.452843  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:19.952792  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:22.452752  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:24.952700  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:27.452954  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:29.952844  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:32.452952  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:34.953666  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:37.452747  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:39.952664  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:41.952726  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:43.952797  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:45.952870  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:48.452683  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:50.453535  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:52.952772  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:55.452788  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:57.452867  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:59.952895  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:02.452860  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:04.952915  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:07.452753  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:09.453637  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:11.952833  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:14.452718  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:16.952636  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:18.953630  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:21.452687  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:23.952770  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:25.952829  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:28.452772  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:30.453677  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:32.952813  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:35.452679  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:37.453048  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:39.453453  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:41.952806  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:44.452710  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:46.952744  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:48.952846  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:51.452675  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:53.452999  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:55.952801  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:58.452747  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:00.952662  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:02.952760  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:05.452732  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:07.452887  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:09.952790  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:11.953431  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:14.452702  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:16.952708  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:19.452740  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:21.453565  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:23.953569  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:25.953740  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:28.452736  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1212 21:05:29.952402  444203 node_ready.go:38] duration metric: took 6m0.000280641s for node "ha-008703-m02" to be "Ready" ...
	I1212 21:05:29.955795  444203 out.go:203] 
	W1212 21:05:29.958921  444203 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:05:29.958945  444203 out.go:285] * 
	W1212 21:05:29.961096  444203 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:05:29.963919  444203 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.573353501Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.2" id=e336470e-972a-4f5a-994c-a420cec7e1fd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.57546657Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-008703/kube-apiserver" id=d075afca-5b01-4a72-af26-095e4c3fda98 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.575597509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.580854253Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.581359382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.601253977Z" level=info msg="Created container cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f: kube-system/kube-apiserver-ha-008703/kube-apiserver" id=d075afca-5b01-4a72-af26-095e4c3fda98 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.602097012Z" level=info msg="Starting container: cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f" id=dbce5b8b-662d-490b-9c7e-6322afe66b97 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.604098646Z" level=info msg="Started container" PID=1222 containerID=cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f description=kube-system/kube-apiserver-ha-008703/kube-apiserver id=dbce5b8b-662d-490b-9c7e-6322afe66b97 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd00fe9660f8414338311e9c84221931557aa6e52742b6d1c070584ba8d05455
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.567811041Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=06123f68-d5a0-4e2d-b7b6-01920744fc92 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.569304002Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=7187c520-d59c-408f-86c9-0f55666a4f1d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.570672884Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-008703/kube-controller-manager" id=d5360af4-8fe4-4c01-bc64-14e0357b8194 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.570785361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.57666159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.577156478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.596206938Z" level=info msg="Created container f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd: kube-system/kube-controller-manager-ha-008703/kube-controller-manager" id=d5360af4-8fe4-4c01-bc64-14e0357b8194 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.597045264Z" level=info msg="Starting container: f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd" id=ef0dba9e-4ceb-40c6-83a8-0ba642cb308d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.598929227Z" level=info msg="Started container" PID=1236 containerID=f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd description=kube-system/kube-controller-manager-ha-008703/kube-controller-manager id=ef0dba9e-4ceb-40c6-83a8-0ba642cb308d name=/runtime.v1.RuntimeService/StartContainer sandboxID=85be12a014baa67b64e07a5bfb74b282216901ce9944cc92b4cfb2a168b1bf90
	Dec 12 21:03:11 ha-008703 conmon[1219]: conmon cf99f099390ca3b31b52 <ninfo>: container 1222 exited with status 255
	Dec 12 21:03:12 ha-008703 crio[664]: time="2025-12-12T21:03:12.427216445Z" level=info msg="Removing container: cf48088caa4cfb42f93d49a1c1e5a462244bc1e12ac0abbb057d0607ebc9e44a" id=b12c6d3e-0600-43bb-900e-f0c271e39ed8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:12 ha-008703 crio[664]: time="2025-12-12T21:03:12.437307235Z" level=info msg="Error loading conmon cgroup of container cf48088caa4cfb42f93d49a1c1e5a462244bc1e12ac0abbb057d0607ebc9e44a: cgroup deleted" id=b12c6d3e-0600-43bb-900e-f0c271e39ed8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:12 ha-008703 crio[664]: time="2025-12-12T21:03:12.440793584Z" level=info msg="Removed container cf48088caa4cfb42f93d49a1c1e5a462244bc1e12ac0abbb057d0607ebc9e44a: kube-system/kube-apiserver-ha-008703/kube-apiserver" id=b12c6d3e-0600-43bb-900e-f0c271e39ed8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:21 ha-008703 conmon[1233]: conmon f56a6db74f42e64847c6 <ninfo>: container 1236 exited with status 1
	Dec 12 21:03:22 ha-008703 crio[664]: time="2025-12-12T21:03:22.453363026Z" level=info msg="Removing container: a1895ad524a296033df01c087a54664f80531ed33e6a1a8194edb5080ed07279" id=c40a9037-66b6-4437-ba74-8a8cb6373f0f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:22 ha-008703 crio[664]: time="2025-12-12T21:03:22.462475363Z" level=info msg="Error loading conmon cgroup of container a1895ad524a296033df01c087a54664f80531ed33e6a1a8194edb5080ed07279: cgroup deleted" id=c40a9037-66b6-4437-ba74-8a8cb6373f0f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:22 ha-008703 crio[664]: time="2025-12-12T21:03:22.46560056Z" level=info msg="Removed container a1895ad524a296033df01c087a54664f80531ed33e6a1a8194edb5080ed07279: kube-system/kube-controller-manager-ha-008703/kube-controller-manager" id=c40a9037-66b6-4437-ba74-8a8cb6373f0f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	f56a6db74f42e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   2 minutes ago       Exited              kube-controller-manager   6                   85be12a014baa       kube-controller-manager-ha-008703   kube-system
	cf99f099390ca       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   2 minutes ago       Exited              kube-apiserver            6                   dd00fe9660f84       kube-apiserver-ha-008703            kube-system
	dec4a7f43553c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   3 minutes ago       Running             etcd                      2                   aacc080aed809       etcd-ha-008703                      kube-system
	8df671b2f67c1       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  0                   b6145737bcabc       kube-vip-ha-008703                  kube-system
	afc1929ca6e74       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   7 minutes ago       Running             kube-scheduler            1                   1b70e5a4174e6       kube-scheduler-ha-008703            kube-system
	d1a55d9c86371       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   7 minutes ago       Exited              etcd                      1                   aacc080aed809       etcd-ha-008703                      kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	[Dec12 20:52] overlayfs: idmapped layers are currently not supported
	[ +33.094252] overlayfs: idmapped layers are currently not supported
	[Dec12 20:53] overlayfs: idmapped layers are currently not supported
	[Dec12 20:55] overlayfs: idmapped layers are currently not supported
	[Dec12 20:56] overlayfs: idmapped layers are currently not supported
	[Dec12 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.790478] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d1a55d9c86371ac0863607a8786cbe02fed629a5326460325861f8f7188e31b3] <==
	{"level":"warn","ts":"2025-12-12T21:01:55.687759Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"warn","ts":"2025-12-12T21:01:55.703030Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:01:55.703048Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:01:55.703082Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"info","ts":"2025-12-12T21:01:55.709338Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709404Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709432Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709445Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709498Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709515Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:01:55.996239Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:01:56.496433Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:01:56.997590Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:01:57.498752Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-12T21:01:57.609514Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609568Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609589Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609600Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609635Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609646Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:01:57.683100Z","caller":"etcdserver/server.go:1830","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-008703 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"warn","ts":"2025-12-12T21:01:57.990505Z","caller":"etcdserver/v3_server.go:923","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-12-12T21:01:57.990631Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.000611314s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-12-12T21:01:57.990677Z","caller":"traceutil/trace.go:172","msg":"trace[1992001531] range","detail":"{range_begin:; range_end:; }","duration":"7.000675488s","start":"2025-12-12T21:01:50.989989Z","end":"2025-12-12T21:01:57.990665Z","steps":["trace[1992001531] 'agreement among raft nodes before linearized reading'  (duration: 7.000604562s)"],"step_count":1}
	{"level":"error","ts":"2025-12-12T21:01:57.990777Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: request timed out\n[+]non_learner ok\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2294\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2822\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3301\nnet/http.(*conn).serve\n\tnet/http/server.go:2102"}
	
	
	==> etcd [dec4a7f43553c1db233f4e5d7706cfb990da47b7ae97783a399590896902caa9] <==
	{"level":"info","ts":"2025-12-12T21:05:27.875534Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:27.875549Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:27.875581Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:27.875592Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:05:27.990305Z","caller":"etcdserver/v3_server.go:923","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-12-12T21:05:27.990477Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.000511282s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-12-12T21:05:27.990511Z","caller":"traceutil/trace.go:172","msg":"trace[1394754012] range","detail":"{range_begin:; range_end:; }","duration":"7.000565658s","start":"2025-12-12T21:05:20.989935Z","end":"2025-12-12T21:05:27.990501Z","steps":["trace[1394754012] 'agreement among raft nodes before linearized reading'  (duration: 7.000508435s)"],"step_count":1}
	{"level":"error","ts":"2025-12-12T21:05:27.990553Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[+]data_corruption ok\n[+]serializable_read ok\n[-]linearizable_read failed: etcdserver: request timed out\n[+]non_learner ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2294\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2822\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3301\nnet/http.(*conn).serve\n\tnet/http/server.go:2102"}
	{"level":"warn","ts":"2025-12-12T21:05:28.221546Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:05:28.221532Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:05:28.221587Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"warn","ts":"2025-12-12T21:05:28.221604Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"info","ts":"2025-12-12T21:05:29.274586Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:29.274639Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:29.274661Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:29.274672Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:29.274708Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:29.274718Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:05:29.496228Z","caller":"etcdserver/server.go:1830","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-008703 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"info","ts":"2025-12-12T21:05:30.675430Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:30.675488Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:30.675508Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:30.675520Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:30.675547Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:30.675557Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	
	
	==> kernel <==
	 21:05:31 up  3:48,  0 user,  load average: 0.02, 0.49, 0.75
	Linux ha-008703 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f] <==
	I1212 21:02:49.657910       1 server.go:150] Version: v1.34.2
	I1212 21:02:49.657951       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1212 21:02:51.709271       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1212 21:02:51.709302       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1212 21:02:51.709311       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1212 21:02:51.709316       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1212 21:02:51.709320       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1212 21:02:51.709325       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1212 21:02:51.709330       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1212 21:02:51.709335       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1212 21:02:51.709339       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1212 21:02:51.709343       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1212 21:02:51.709348       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1212 21:02:51.709352       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1212 21:02:51.728675       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1212 21:02:51.730483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1212 21:02:51.730660       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1212 21:02:51.741349       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 21:02:51.747924       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1212 21:02:51.748039       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1212 21:02:51.748302       1 instance.go:239] Using reconciler: lease
	W1212 21:02:51.749591       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:03:11.724871       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1212 21:03:11.728050       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1212 21:03:11.750018       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd] <==
	I1212 21:03:00.877789       1 serving.go:386] Generated self-signed cert in-memory
	I1212 21:03:01.516406       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1212 21:03:01.516437       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:03:01.517922       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 21:03:01.518179       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 21:03:01.518334       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1212 21:03:01.518416       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 21:03:21.521653       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [afc1929ca6e740de8c3a64acc626b0e59ca06f13bd451285650a7214808d9608] <==
	E1212 21:04:32.550798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:04:37.379816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:04:48.233878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 21:04:49.508637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:04:49.753068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 21:04:51.098987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:04:51.493097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 21:04:52.085286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 21:04:54.810851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 21:04:56.542675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 21:04:59.479194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 21:05:01.280059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 21:05:03.231202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 21:05:05.294802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 21:05:06.646919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 21:05:07.513445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 21:05:11.215768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:05:14.637981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 21:05:14.678743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 21:05:18.402248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 21:05:21.622956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:05:22.967707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:05:27.893192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:05:30.607858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 21:05:30.694995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	
	
	==> kubelet <==
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.084210     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.185324     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.286080     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.387315     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.489045     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.584220     802 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-008703\" not found"
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.590034     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.691899     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.793114     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.893990     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:29 ha-008703 kubelet[802]: E1212 21:05:29.996981     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:30 ha-008703 kubelet[802]: E1212 21:05:30.098463     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:30 ha-008703 kubelet[802]: E1212 21:05:30.199534     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:30 ha-008703 kubelet[802]: E1212 21:05:30.301092     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:30 ha-008703 kubelet[802]: E1212 21:05:30.402421     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:30 ha-008703 kubelet[802]: E1212 21:05:30.502874     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:30 ha-008703 kubelet[802]: E1212 21:05:30.603683     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:30 ha-008703 kubelet[802]: E1212 21:05:30.704274     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:30 ha-008703 kubelet[802]: E1212 21:05:30.805656     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:30 ha-008703 kubelet[802]: E1212 21:05:30.906675     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:31 ha-008703 kubelet[802]: E1212 21:05:31.007380     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:31 ha-008703 kubelet[802]: E1212 21:05:31.108359     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:31 ha-008703 kubelet[802]: E1212 21:05:31.209066     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:31 ha-008703 kubelet[802]: E1212 21:05:31.309769     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:31 ha-008703 kubelet[802]: E1212 21:05:31.410955     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703: exit status 2 (337.825754ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "ha-008703" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (507.13s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (2.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-008703 node delete m03 --alsologtostderr -v 5: exit status 83 (185.283831ms)

                                                
                                                
-- stdout --
	* The control-plane node ha-008703-m03 host is not running: state=Stopped
	  To start a cluster, run: "minikube start -p ha-008703"

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:05:31.930816  447726 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:05:31.932197  447726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:31.932240  447726 out.go:374] Setting ErrFile to fd 2...
	I1212 21:05:31.932260  447726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:31.932591  447726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:05:31.932961  447726 mustload.go:66] Loading cluster: ha-008703
	I1212 21:05:31.933482  447726 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:31.934014  447726 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:31.951961  447726 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:05:31.952274  447726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:05:32.019778  447726 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 21:05:31.997610517 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:05:32.020204  447726 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:32.037401  447726 host.go:66] Checking if "ha-008703-m02" exists ...
	I1212 21:05:32.037928  447726 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:05:32.059712  447726 out.go:179] * The control-plane node ha-008703-m03 host is not running: state=Stopped
	I1212 21:05:32.062555  447726 out.go:179]   To start a cluster, run: "minikube start -p ha-008703"

                                                
                                                
** /stderr **
ha_test.go:491: node delete returned an error. args "out/minikube-linux-arm64 -p ha-008703 node delete m03 --alsologtostderr -v 5": exit status 83
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5: exit status 7 (557.00522ms)

                                                
                                                
-- stdout --
	ha-008703
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-008703-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-008703-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-008703-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:05:32.126242  447778 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:05:32.126419  447778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:32.126449  447778 out.go:374] Setting ErrFile to fd 2...
	I1212 21:05:32.126470  447778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:32.126746  447778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:05:32.126965  447778 out.go:368] Setting JSON to false
	I1212 21:05:32.127019  447778 mustload.go:66] Loading cluster: ha-008703
	I1212 21:05:32.127096  447778 notify.go:221] Checking for updates...
	I1212 21:05:32.128075  447778 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:32.128127  447778 status.go:174] checking status of ha-008703 ...
	I1212 21:05:32.128718  447778 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:32.149173  447778 status.go:371] ha-008703 host status = "Running" (err=<nil>)
	I1212 21:05:32.149195  447778 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:05:32.149736  447778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:32.181797  447778 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:05:32.182129  447778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:05:32.182176  447778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:32.199528  447778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:32.305995  447778 ssh_runner.go:195] Run: systemctl --version
	I1212 21:05:32.312673  447778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:05:32.325514  447778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:05:32.390199  447778 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 21:05:32.38087126 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:05:32.390770  447778 kubeconfig.go:125] found "ha-008703" server: "https://192.168.49.254:8443"
	I1212 21:05:32.390811  447778 api_server.go:166] Checking apiserver status ...
	I1212 21:05:32.390855  447778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:05:32.401229  447778 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:32.401255  447778 status.go:463] ha-008703 apiserver status = Running (err=<nil>)
	I1212 21:05:32.401265  447778 status.go:176] ha-008703 status: &{Name:ha-008703 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 21:05:32.401282  447778 status.go:174] checking status of ha-008703-m02 ...
	I1212 21:05:32.401611  447778 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:32.418400  447778 status.go:371] ha-008703-m02 host status = "Running" (err=<nil>)
	I1212 21:05:32.418424  447778 host.go:66] Checking if "ha-008703-m02" exists ...
	I1212 21:05:32.418729  447778 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:32.436671  447778 host.go:66] Checking if "ha-008703-m02" exists ...
	I1212 21:05:32.436983  447778 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:05:32.437026  447778 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:32.455306  447778 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:32.557678  447778 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:05:32.573889  447778 kubeconfig.go:125] found "ha-008703" server: "https://192.168.49.254:8443"
	I1212 21:05:32.573924  447778 api_server.go:166] Checking apiserver status ...
	I1212 21:05:32.573976  447778 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1212 21:05:32.584857  447778 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:32.584878  447778 status.go:463] ha-008703-m02 apiserver status = Running (err=<nil>)
	I1212 21:05:32.584886  447778 status.go:176] ha-008703-m02 status: &{Name:ha-008703-m02 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 21:05:32.584903  447778 status.go:174] checking status of ha-008703-m03 ...
	I1212 21:05:32.585220  447778 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:05:32.603371  447778 status.go:371] ha-008703-m03 host status = "Stopped" (err=<nil>)
	I1212 21:05:32.603391  447778 status.go:384] host is not running, skipping remaining checks
	I1212 21:05:32.603397  447778 status.go:176] ha-008703-m03 status: &{Name:ha-008703-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 21:05:32.603418  447778 status.go:174] checking status of ha-008703-m04 ...
	I1212 21:05:32.603726  447778 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 21:05:32.620951  447778 status.go:371] ha-008703-m04 host status = "Stopped" (err=<nil>)
	I1212 21:05:32.620989  447778 status.go:384] host is not running, skipping remaining checks
	I1212 21:05:32.620997  447778 status.go:176] ha-008703-m04 status: &{Name:ha-008703-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5" : exit status 7
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-008703
helpers_test.go:244: (dbg) docker inspect ha-008703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	        "Created": "2025-12-12T20:51:45.347520369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 444329,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:57:42.720187316Z",
	            "FinishedAt": "2025-12-12T20:57:42.030104403Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hosts",
	        "LogPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a-json.log",
	        "Name": "/ha-008703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-008703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-008703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	                "LowerDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-008703",
	                "Source": "/var/lib/docker/volumes/ha-008703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-008703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-008703",
	                "name.minikube.sigs.k8s.io": "ha-008703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "493c43ee23fd8e7f78466c871a302edc137070db11f7e6b5d032ce802f3f0262",
	            "SandboxKey": "/var/run/docker/netns/493c43ee23fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-008703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:72:e3:2e:78:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff7ed303f4da65b7f5bbe1449be583e134fa05bb2920a77ae31b6f437cc1bd4b",
	                    "EndpointID": "43672cbb724d118edeacd3584cc29f7251f2a336562cd7d37b8d180ba19da903",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-008703",
	                        "2ec03df03a30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703: exit status 2 (321.92435ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 logs -n 25
helpers_test.go:261: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m02 sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m02.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m03:/home/docker/cp-test.txt ha-008703-m04:/home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp testdata/cp-test.txt ha-008703-m04:/home/docker/cp-test.txt                                                            │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703-m04.txt │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703:/home/docker/cp-test_ha-008703-m04_ha-008703.txt                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703.txt                                                │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m02 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m03:/home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node start m02 --alsologtostderr -v 5                                                                                     │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:57 UTC │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │ 12 Dec 25 20:57 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5                                                                                  │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	│ node    │ ha-008703 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:57:42
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:57:42.443959  444203 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:57:42.444139  444203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:57:42.444170  444203 out.go:374] Setting ErrFile to fd 2...
	I1212 20:57:42.444190  444203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:57:42.444488  444203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:57:42.444894  444203 out.go:368] Setting JSON to false
	I1212 20:57:42.445764  444203 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":13215,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:57:42.445866  444203 start.go:143] virtualization:  
	I1212 20:57:42.448973  444203 out.go:179] * [ha-008703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:57:42.452845  444203 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:57:42.452922  444203 notify.go:221] Checking for updates...
	I1212 20:57:42.458690  444203 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:57:42.461546  444203 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:42.464549  444203 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:57:42.467438  444203 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:57:42.470311  444203 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:57:42.473663  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:42.473791  444203 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:57:42.502175  444203 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:57:42.502305  444203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:57:42.567154  444203 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 20:57:42.556873235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:57:42.567281  444203 docker.go:319] overlay module found
	I1212 20:57:42.570683  444203 out.go:179] * Using the docker driver based on existing profile
	I1212 20:57:42.573609  444203 start.go:309] selected driver: docker
	I1212 20:57:42.573638  444203 start.go:927] validating driver "docker" against &{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:42.573801  444203 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:57:42.573920  444203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:57:42.631794  444203 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 20:57:42.621825898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:57:42.632218  444203 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:57:42.632254  444203 cni.go:84] Creating CNI manager for ""
	I1212 20:57:42.632316  444203 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 20:57:42.632425  444203 start.go:353] cluster config:
	{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:42.635654  444203 out.go:179] * Starting "ha-008703" primary control-plane node in "ha-008703" cluster
	I1212 20:57:42.638374  444203 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:57:42.641273  444203 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:57:42.644097  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:42.644143  444203 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 20:57:42.644156  444203 cache.go:65] Caching tarball of preloaded images
	I1212 20:57:42.644194  444203 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:57:42.644262  444203 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:57:42.644272  444203 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:57:42.644440  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:42.664350  444203 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:57:42.664409  444203 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:57:42.664432  444203 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:57:42.664465  444203 start.go:360] acquireMachinesLock for ha-008703: {Name:mk6e7d74f274e3ed345384f8b747c056bd141bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:57:42.664534  444203 start.go:364] duration metric: took 45.473µs to acquireMachinesLock for "ha-008703"
	I1212 20:57:42.664558  444203 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:57:42.664567  444203 fix.go:54] fixHost starting: 
	I1212 20:57:42.664830  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:57:42.682444  444203 fix.go:112] recreateIfNeeded on ha-008703: state=Stopped err=<nil>
	W1212 20:57:42.682482  444203 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:57:42.687702  444203 out.go:252] * Restarting existing docker container for "ha-008703" ...
	I1212 20:57:42.687806  444203 cli_runner.go:164] Run: docker start ha-008703
	I1212 20:57:42.929392  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:57:42.950691  444203 kic.go:430] container "ha-008703" state is running.
	I1212 20:57:42.951124  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:42.975911  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:42.976159  444203 machine.go:94] provisionDockerMachine start ...
	I1212 20:57:42.976233  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:43.000950  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:43.001319  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:43.001348  444203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:57:43.002175  444203 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 20:57:46.155930  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 20:57:46.155957  444203 ubuntu.go:182] provisioning hostname "ha-008703"
	I1212 20:57:46.156028  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.174281  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.174613  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.174631  444203 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703 && echo "ha-008703" | sudo tee /etc/hostname
	I1212 20:57:46.334176  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 20:57:46.334256  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.353092  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.353419  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.353444  444203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:57:46.504764  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
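provisionDockerMachine above talks to the restarted container over its published SSH port (22/tcp mapped to 127.0.0.1:33192, looked up via docker container inspect) and runs shell snippets such as the hostname and /etc/hosts updates. A minimal sketch of running one such command with golang.org/x/crypto/ssh, reusing the key path, user, and port from the log; this is an illustrative stand-in for minikube's ssh_runner, not its actual code:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Key path and port taken from the log lines above.
		key, err := os.ReadFile("/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}

		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only in a throwaway test environment
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33192", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()

		// Same first command the provisioner runs: read back the hostname.
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("remote hostname: %s", out)
	}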
	I1212 20:57:46.504855  444203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:57:46.504906  444203 ubuntu.go:190] setting up certificates
	I1212 20:57:46.504931  444203 provision.go:84] configureAuth start
	I1212 20:57:46.505018  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:46.522153  444203 provision.go:143] copyHostCerts
	I1212 20:57:46.522196  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:46.522237  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:57:46.522245  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:46.522321  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:57:46.522414  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:46.522431  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:57:46.522435  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:46.522464  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:57:46.522512  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:46.522532  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:57:46.522536  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:46.522563  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:57:46.522618  444203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703 san=[127.0.0.1 192.168.49.2 ha-008703 localhost minikube]
	I1212 20:57:46.651816  444203 provision.go:177] copyRemoteCerts
	I1212 20:57:46.651886  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:57:46.651968  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.671188  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:46.776309  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:57:46.776386  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:57:46.794675  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:57:46.794741  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1212 20:57:46.813024  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:57:46.813085  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:57:46.830950  444203 provision.go:87] duration metric: took 325.983006ms to configureAuth
	I1212 20:57:46.830977  444203 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:57:46.831235  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:46.831340  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.848478  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.848794  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.848812  444203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:57:47.235920  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:57:47.235994  444203 machine.go:97] duration metric: took 4.259816851s to provisionDockerMachine
	I1212 20:57:47.236020  444203 start.go:293] postStartSetup for "ha-008703" (driver="docker")
	I1212 20:57:47.236048  444203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:57:47.236157  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:57:47.236233  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.261608  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.368446  444203 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:57:47.372121  444203 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:57:47.372152  444203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:57:47.372170  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:57:47.372227  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:57:47.372309  444203 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:57:47.372320  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:57:47.372447  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:57:47.380725  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:47.398959  444203 start.go:296] duration metric: took 162.907605ms for postStartSetup
	I1212 20:57:47.399064  444203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:57:47.399134  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.420756  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.525530  444203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:57:47.530321  444203 fix.go:56] duration metric: took 4.865746757s for fixHost
	I1212 20:57:47.530348  444203 start.go:83] releasing machines lock for "ha-008703", held for 4.865800567s
	I1212 20:57:47.530419  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:47.548629  444203 ssh_runner.go:195] Run: cat /version.json
	I1212 20:57:47.548688  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.548950  444203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:57:47.549003  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.573240  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.580519  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.676043  444203 ssh_runner.go:195] Run: systemctl --version
	I1212 20:57:47.771712  444203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:57:47.808898  444203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:57:47.813508  444203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:57:47.813590  444203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:57:47.821723  444203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:57:47.821748  444203 start.go:496] detecting cgroup driver to use...
	I1212 20:57:47.821827  444203 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:57:47.821894  444203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:57:47.837549  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:57:47.851337  444203 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:57:47.851435  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:57:47.867827  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:57:47.881469  444203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:57:47.990806  444203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:57:48.117810  444203 docker.go:234] disabling docker service ...
	I1212 20:57:48.117891  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:57:48.133641  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:57:48.146962  444203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:57:48.263631  444203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:57:48.385870  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:57:48.400502  444203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:57:48.415928  444203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:57:48.415999  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.425436  444203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:57:48.425516  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.434622  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.443654  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.452998  444203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:57:48.462000  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.471517  444203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.480019  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.488892  444203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:57:48.501776  444203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:57:48.509429  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:48.636874  444203 ssh_runner.go:195] Run: sudo systemctl restart crio
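The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image, switch the cgroup manager to cgroupfs, move conmon into the pod cgroup, and open unprivileged ports, before CRI-O is restarted. Because CRI-O merges every file under /etc/crio/crio.conf.d/ in lexical order, the same settings could also be expressed as a separate TOML drop-in; a minimal sketch, where the drop-in file name is a hypothetical example:

	package main

	import "os"

	// Drop-in equivalent to the in-place sed edits shown in the log.
	const dropIn = `[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	`

	func main() {
		// Requires root; follow up with `systemctl restart crio` as the log does.
		if err := os.WriteFile("/etc/crio/crio.conf.d/99-minikube-example.conf", []byte(dropIn), 0o644); err != nil {
			panic(err)
		}
	}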
	I1212 20:57:48.831677  444203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:57:48.831797  444203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:57:48.835749  444203 start.go:564] Will wait 60s for crictl version
	I1212 20:57:48.835860  444203 ssh_runner.go:195] Run: which crictl
	I1212 20:57:48.839496  444203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:57:48.865845  444203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:57:48.865936  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:57:48.896176  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:57:48.926063  444203 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:57:48.928824  444203 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:57:48.945819  444203 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:57:48.949721  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:57:48.960274  444203 kubeadm.go:884] updating cluster {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:57:48.960470  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:48.960528  444203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:57:48.995177  444203 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:57:48.995203  444203 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:57:48.995261  444203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:57:49.022349  444203 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:57:49.022375  444203 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:57:49.022384  444203 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 20:57:49.022522  444203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:57:49.022613  444203 ssh_runner.go:195] Run: crio config
	I1212 20:57:49.094808  444203 cni.go:84] Creating CNI manager for ""
	I1212 20:57:49.094833  444203 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 20:57:49.094884  444203 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:57:49.094931  444203 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-008703 NodeName:ha-008703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:57:49.095072  444203 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-008703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
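The kubeadm config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration joined by ---) is produced by substituting node- and cluster-specific values such as the node name, advertise address, pod subnet, and Kubernetes version into a template. A minimal sketch of that kind of rendering with text/template; the template text and struct fields here are simplified illustrations, not minikube's actual kubeadm templates:

	package main

	import (
		"os"
		"text/template"
	)

	// Hypothetical subset of the values substituted into the generated config.
	type kubeadmParams struct {
		NodeName          string
		AdvertiseAddress  string
		PodSubnet         string
		KubernetesVersion string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: 8443
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initTmpl))
		p := kubeadmParams{
			NodeName:          "ha-008703",
			AdvertiseAddress:  "192.168.49.2",
			PodSubnet:         "10.244.0.0/16",
			KubernetesVersion: "v1.34.2",
		}
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}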
	
	I1212 20:57:49.095097  444203 kube-vip.go:115] generating kube-vip config ...
	I1212 20:57:49.095151  444203 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 20:57:49.107313  444203 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:57:49.107428  444203 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
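The kube-vip static pod above is generated without IPVS control-plane load balancing because the earlier "lsmod | grep ip_vs" probe exited with status 1. A minimal sketch of that kernel-module probe, assuming a direct local exec of lsmod rather than minikube's ssh_runner:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hasIPVS reports whether any ip_vs kernel module is currently loaded,
	// mirroring the `lsmod | grep ip_vs` check in the log.
	func hasIPVS() (bool, error) {
		out, err := exec.Command("lsmod").Output()
		if err != nil {
			return false, err
		}
		return strings.Contains(string(out), "ip_vs"), nil
	}

	func main() {
		ok, err := hasIPVS()
		if err != nil {
			panic(err)
		}
		fmt.Println("ip_vs modules loaded:", ok)
	}

When the probe fails, as it does here, the manifest falls back to ARP-based leader election for the 192.168.49.254 VIP instead of IPVS load balancing.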
	I1212 20:57:49.107499  444203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:57:49.115345  444203 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:57:49.115415  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1212 20:57:49.123505  444203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1212 20:57:49.136430  444203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:57:49.149479  444203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1212 20:57:49.163560  444203 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 20:57:49.176571  444203 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 20:57:49.180272  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:57:49.190686  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:49.306812  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:57:49.322473  444203 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.2
	I1212 20:57:49.322495  444203 certs.go:195] generating shared ca certs ...
	I1212 20:57:49.322510  444203 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.322646  444203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:57:49.322706  444203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:57:49.322721  444203 certs.go:257] generating profile certs ...
	I1212 20:57:49.322803  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 20:57:49.322831  444203 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904
	I1212 20:57:49.322854  444203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1212 20:57:49.472738  444203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 ...
	I1212 20:57:49.472774  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904: {Name:mk2a5379bc5668a2307c7e3ec981ab026dda45c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.472981  444203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904 ...
	I1212 20:57:49.473001  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904: {Name:mk9431140de21966b13bcbc9ba3792a6b7192788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.473093  444203 certs.go:382] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt
	I1212 20:57:49.473241  444203 certs.go:386] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key
	I1212 20:57:49.473382  444203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 20:57:49.473401  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:57:49.473419  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:57:49.473436  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:57:49.473449  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:57:49.473464  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:57:49.473478  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:57:49.473493  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:57:49.473504  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:57:49.473559  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:57:49.473598  444203 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:57:49.473610  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:57:49.473644  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:57:49.473680  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:57:49.473711  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:57:49.473759  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:49.473803  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.473819  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.473830  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.474446  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:57:49.501229  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:57:49.522107  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:57:49.550223  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:57:49.582434  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 20:57:49.603191  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:57:49.623340  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:57:49.644021  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:57:49.665040  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:57:49.686900  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:57:49.705561  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:57:49.723829  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:57:49.737136  444203 ssh_runner.go:195] Run: openssl version
	I1212 20:57:49.743571  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.751112  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:57:49.759507  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.763360  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.763427  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.804630  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:57:49.811926  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.819270  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:57:49.826837  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.830838  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.830912  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.872515  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:57:49.880250  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.887711  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:57:49.895442  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.899072  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.899140  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.940560  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:57:49.948269  444203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:57:49.952111  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:57:49.994329  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:57:50.049087  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:57:50.098831  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:57:50.155411  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:57:50.252310  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
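The openssl x509 -checkend 86400 runs above confirm that each control-plane certificate remains valid for at least the next 24 hours before the cluster is restarted. The equivalent check can be written with Go's crypto/x509; a minimal sketch, using one of the certificate paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path
	// expires within the given duration (the log uses 86400s, i.e. 24h).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Println("expires within 24h:", soon)
	}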
	I1212 20:57:50.338780  444203 kubeadm.go:401] StartCluster: {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:50.338978  444203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:57:50.339069  444203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:57:50.398773  444203 cri.go:89] found id: "8df671b2f67c1fea6933eed59bb0ed038b61ceb87afc6b29bfda67eb56bf94c5"
	I1212 20:57:50.398830  444203 cri.go:89] found id: "153af1b54c51e5f4602a99ee68deeb035520f031d6275c686a4d837adf8c7a9b"
	I1212 20:57:50.398851  444203 cri.go:89] found id: "afc1929ca6e740de8c3a64acc626b0e59ca06f13bd451285650a7214808d9608"
	I1212 20:57:50.398870  444203 cri.go:89] found id: "3df9e833b1b81ce05c8ed6dff7db997b5fe66bf67be14061cdbe13efd2dd87cf"
	I1212 20:57:50.398889  444203 cri.go:89] found id: "d1a55d9c86371ac0863607a8786cbe02fed629a5326460325861f8f7188e31b3"
	I1212 20:57:50.398924  444203 cri.go:89] found id: ""
	I1212 20:57:50.399008  444203 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 20:57:50.427490  444203 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:57:50Z" level=error msg="open /run/runc: no such file or directory"
	I1212 20:57:50.427620  444203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:57:50.436409  444203 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:57:50.436480  444203 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:57:50.436565  444203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:57:50.450254  444203 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:57:50.450723  444203 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-008703" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:50.450865  444203 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "ha-008703" cluster setting kubeconfig missing "ha-008703" context setting]
	I1212 20:57:50.451184  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.451770  444203 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
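The rest.Config above targets https://192.168.49.2:8443 with the profile's client certificate and key plus the cluster CA. A minimal sketch of building a Kubernetes client from the same files with k8s.io/client-go, as an illustrative stand-in for minikube's kapi helper:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Paths taken from the client config in the log above.
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key",
				CAFile:   "/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(nodes.Items))
	}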
	I1212 20:57:50.452602  444203 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:57:50.452649  444203 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:57:50.452671  444203 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:57:50.452699  444203 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:57:50.452724  444203 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:57:50.452669  444203 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 20:57:50.453064  444203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:57:50.471433  444203 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 20:57:50.471502  444203 kubeadm.go:602] duration metric: took 34.98508ms to restartPrimaryControlPlane
	I1212 20:57:50.471528  444203 kubeadm.go:403] duration metric: took 132.757161ms to StartCluster
	I1212 20:57:50.471560  444203 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.471649  444203 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:50.472264  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.472602  444203 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:57:50.472664  444203 start.go:242] waiting for startup goroutines ...
	I1212 20:57:50.472701  444203 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:57:50.473166  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:50.478696  444203 out.go:179] * Enabled addons: 
	I1212 20:57:50.481795  444203 addons.go:530] duration metric: took 9.096965ms for enable addons: enabled=[]
	I1212 20:57:50.481888  444203 start.go:247] waiting for cluster config update ...
	I1212 20:57:50.481913  444203 start.go:256] writing updated cluster config ...
	I1212 20:57:50.485267  444203 out.go:203] 
	I1212 20:57:50.488653  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:50.488812  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:50.492075  444203 out.go:179] * Starting "ha-008703-m02" control-plane node in "ha-008703" cluster
	I1212 20:57:50.494987  444203 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:57:50.498206  444203 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:57:50.501052  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:50.501113  444203 cache.go:65] Caching tarball of preloaded images
	I1212 20:57:50.501125  444203 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:57:50.501268  444203 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:57:50.501295  444203 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:57:50.501440  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:50.539828  444203 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:57:50.539849  444203 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:57:50.539866  444203 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:57:50.539902  444203 start.go:360] acquireMachinesLock for ha-008703-m02: {Name:mk9bbd559a38ee71084b431688c18ccf671707a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:57:50.539970  444203 start.go:364] duration metric: took 48.32µs to acquireMachinesLock for "ha-008703-m02"
	I1212 20:57:50.539997  444203 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:57:50.540008  444203 fix.go:54] fixHost starting: m02
	I1212 20:57:50.540289  444203 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 20:57:50.570630  444203 fix.go:112] recreateIfNeeded on ha-008703-m02: state=Stopped err=<nil>
	W1212 20:57:50.570662  444203 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:57:50.573920  444203 out.go:252] * Restarting existing docker container for "ha-008703-m02" ...
	I1212 20:57:50.574010  444203 cli_runner.go:164] Run: docker start ha-008703-m02
	I1212 20:57:51.021435  444203 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 20:57:51.051445  444203 kic.go:430] container "ha-008703-m02" state is running.
	I1212 20:57:51.051835  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:51.081868  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:51.082129  444203 machine.go:94] provisionDockerMachine start ...
	I1212 20:57:51.082189  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:51.114065  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:51.114398  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:51.114407  444203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:57:51.115163  444203 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 20:57:54.335915  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 20:57:54.335981  444203 ubuntu.go:182] provisioning hostname "ha-008703-m02"
	I1212 20:57:54.336094  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:54.365312  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:54.365660  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:54.365676  444203 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m02 && echo "ha-008703-m02" | sudo tee /etc/hostname
	I1212 20:57:54.750173  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 20:57:54.750344  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:54.784610  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:54.784933  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:54.784950  444203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:57:55.052390  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:57:55.052421  444203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:57:55.052440  444203 ubuntu.go:190] setting up certificates
	I1212 20:57:55.052459  444203 provision.go:84] configureAuth start
	I1212 20:57:55.052553  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:55.106212  444203 provision.go:143] copyHostCerts
	I1212 20:57:55.106261  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:55.106295  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:57:55.106307  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:55.106385  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:57:55.106475  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:55.106498  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:57:55.106503  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:55.106533  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:57:55.106577  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:55.106598  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:57:55.106605  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:55.106631  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:57:55.106681  444203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m02 san=[127.0.0.1 192.168.49.3 ha-008703-m02 localhost minikube]
	I1212 20:57:55.315977  444203 provision.go:177] copyRemoteCerts
	I1212 20:57:55.316047  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:57:55.316093  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:55.334254  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:55.478383  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:57:55.478451  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 20:57:55.517393  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:57:55.517463  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:57:55.542182  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:57:55.542251  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:57:55.571975  444203 provision.go:87] duration metric: took 519.496148ms to configureAuth
	I1212 20:57:55.572013  444203 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:57:55.572281  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:55.572439  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:55.601112  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:55.601422  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:55.601436  444203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:57:56.060871  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:57:56.060947  444203 machine.go:97] duration metric: took 4.978806446s to provisionDockerMachine
	I1212 20:57:56.060977  444203 start.go:293] postStartSetup for "ha-008703-m02" (driver="docker")
	I1212 20:57:56.061019  444203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:57:56.061131  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:57:56.061204  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.079622  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.188393  444203 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:57:56.191735  444203 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:57:56.191761  444203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:57:56.191773  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:57:56.191830  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:57:56.191915  444203 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:57:56.191925  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:57:56.192023  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:57:56.199559  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:56.216617  444203 start.go:296] duration metric: took 155.610404ms for postStartSetup
	I1212 20:57:56.216698  444203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:57:56.216740  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.233309  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.337931  444203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:57:56.342965  444203 fix.go:56] duration metric: took 5.802950492s for fixHost
	I1212 20:57:56.342991  444203 start.go:83] releasing machines lock for "ha-008703-m02", held for 5.803007207s
	I1212 20:57:56.343061  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:56.364818  444203 out.go:179] * Found network options:
	I1212 20:57:56.367652  444203 out.go:179]   - NO_PROXY=192.168.49.2
	W1212 20:57:56.370401  444203 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 20:57:56.370443  444203 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 20:57:56.370511  444203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:57:56.370552  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.370593  444203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:57:56.370646  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.391626  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.398057  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.575929  444203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:57:56.710881  444203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:57:56.710966  444203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:57:56.722145  444203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:57:56.722217  444203 start.go:496] detecting cgroup driver to use...
	I1212 20:57:56.722266  444203 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:57:56.722342  444203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:57:56.742981  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:57:56.765595  444203 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:57:56.765706  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:57:56.793166  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:57:56.814044  444203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:57:57.024630  444203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:57:57.240003  444203 docker.go:234] disabling docker service ...
	I1212 20:57:57.240088  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:57:57.260709  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:57:57.276845  444203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:57:57.490011  444203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:57:57.701011  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:57:57.718231  444203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:57:57.734672  444203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:57:57.734758  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.752791  444203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:57:57.752868  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.767185  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.783487  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.798836  444203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:57:57.808080  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.821261  444203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.835565  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.848412  444203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:57:57.861550  444203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:57:57.870875  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:58.097322  444203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:59:28.418240  444203 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320839757s)
	I1212 20:59:28.418266  444203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:59:28.418318  444203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:59:28.421907  444203 start.go:564] Will wait 60s for crictl version
	I1212 20:59:28.421970  444203 ssh_runner.go:195] Run: which crictl
	I1212 20:59:28.425474  444203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:59:28.451137  444203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:59:28.451224  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:59:28.487374  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:59:28.523846  444203 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:59:28.527097  444203 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 20:59:28.530093  444203 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:59:28.546578  444203 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:59:28.550700  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:59:28.561522  444203 mustload.go:66] Loading cluster: ha-008703
	I1212 20:59:28.561768  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:59:28.562034  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:59:28.579699  444203 host.go:66] Checking if "ha-008703" exists ...
	I1212 20:59:28.579981  444203 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.3
	I1212 20:59:28.579989  444203 certs.go:195] generating shared ca certs ...
	I1212 20:59:28.580003  444203 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:59:28.580127  444203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:59:28.580165  444203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:59:28.580173  444203 certs.go:257] generating profile certs ...
	I1212 20:59:28.580247  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 20:59:28.580315  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.b6a91b51
	I1212 20:59:28.580355  444203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 20:59:28.580363  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:59:28.580407  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:59:28.580418  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:59:28.580430  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:59:28.580441  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:59:28.580452  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:59:28.580465  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:59:28.580475  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:59:28.580526  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:59:28.580557  444203 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:59:28.580565  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:59:28.580591  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:59:28.580614  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:59:28.580640  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:59:28.580684  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:59:28.580713  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:59:28.580727  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:59:28.580738  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:28.580791  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:59:28.597816  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:59:28.696708  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 20:59:28.700659  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 20:59:28.709283  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 20:59:28.713481  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 20:59:28.721707  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 20:59:28.725369  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 20:59:28.733654  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 20:59:28.737443  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 20:59:28.745834  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 20:59:28.749617  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 20:59:28.758164  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 20:59:28.761831  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 20:59:28.770067  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:59:28.787610  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:59:28.806372  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:59:28.824957  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:59:28.844568  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 20:59:28.863238  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:59:28.881382  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:59:28.900337  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:59:28.919403  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:59:28.938551  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:59:28.958859  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:59:28.977347  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 20:59:28.998600  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 20:59:29.014406  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 20:59:29.027571  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 20:59:29.040968  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 20:59:29.054581  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 20:59:29.067754  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 20:59:29.080811  444203 ssh_runner.go:195] Run: openssl version
	I1212 20:59:29.087180  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.095114  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:59:29.102755  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.106745  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.106853  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.152715  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:59:29.160933  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.168533  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:59:29.177095  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.181103  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.181174  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.222399  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:59:29.233819  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.241844  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:59:29.249788  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.254119  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.254190  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.295461  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:59:29.303146  444203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:59:29.307067  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:59:29.350787  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:59:29.392520  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:59:29.433715  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:59:29.474688  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:59:29.516288  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:59:29.557959  444203 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1212 20:59:29.558056  444203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:59:29.558087  444203 kube-vip.go:115] generating kube-vip config ...
	I1212 20:59:29.558148  444203 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 20:59:29.572235  444203 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:59:29.572334  444203 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1212 20:59:29.572441  444203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:59:29.580681  444203 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:59:29.580751  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 20:59:29.588356  444203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 20:59:29.602149  444203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:59:29.615313  444203 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 20:59:29.629715  444203 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 20:59:29.633469  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:59:29.643261  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:59:29.776061  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:59:29.790278  444203 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:59:29.790703  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:59:29.794669  444203 out.go:179] * Verifying Kubernetes components...
	I1212 20:59:29.797306  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:59:29.936519  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:59:29.950752  444203 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 20:59:29.950831  444203 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 20:59:29.952083  444203 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m02" to be "Ready" ...
	W1212 20:59:31.953427  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:33.953536  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:36.453558  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:38.952703  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:41.452655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:43.452691  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:45.952750  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:48.452655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:50.452746  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:00.954217  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:00:10.954802  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:00:12.960855  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:00:12.961321  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:58400->192.168.49.2:8443: read: connection reset by peer
	W1212 21:00:15.453549  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:17.952657  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:20.453573  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:22.952730  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:24.953541  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:27.452882  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:29.952571  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:32.452751  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:34.953509  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:37.452853  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:39.953131  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:41.953378  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:44.452656  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:46.952721  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:48.952858  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:51.452609  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:53.452824  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:55.952717  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:57.953626  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:08.953781  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:01:18.955065  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:01:20.812078  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:01:21.453435  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:23.952633  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:25.953670  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:28.453604  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:30.952713  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:32.953585  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:34.953661  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:37.452751  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:39.952713  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:42.452830  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:44.952768  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:47.452920  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:49.952685  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:51.953605  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:54.452622  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:56.453648  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:58.952804  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:01.452588  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:03.452926  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:05.952702  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:07.952958  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:09.953705  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:12.452917  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:14.952877  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:17.452818  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:19.952741  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:22.452860  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:24.952709  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:27.452855  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:29.952655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:31.952748  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:33.952822  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:36.452695  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:38.452788  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:40.452868  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:42.952779  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:44.952905  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:47.453071  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:49.453482  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:59.953684  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:03:09.954306  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:03:12.758807  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:03:12.759288  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:51776->192.168.49.2:8443: read: connection reset by peer
	W1212 21:03:14.952704  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:17.452843  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:19.952792  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:22.452752  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:24.952700  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:27.452954  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:29.952844  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:32.452952  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:34.953666  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:37.452747  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:39.952664  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:41.952726  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:43.952797  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:45.952870  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:48.452683  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:50.453535  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:52.952772  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:55.452788  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:57.452867  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:59.952895  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:02.452860  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:04.952915  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:07.452753  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:09.453637  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:11.952833  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:14.452718  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:16.952636  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:18.953630  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:21.452687  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:23.952770  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:25.952829  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:28.452772  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:30.453677  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:32.952813  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:35.452679  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:37.453048  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:39.453453  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:41.952806  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:44.452710  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:46.952744  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:48.952846  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:51.452675  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:53.452999  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:55.952801  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:58.452747  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:00.952662  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:02.952760  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:05.452732  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:07.452887  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:09.952790  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:11.953431  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:14.452702  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:16.952708  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:19.452740  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:21.453565  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:23.953569  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:25.953740  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:28.452736  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1212 21:05:29.952402  444203 node_ready.go:38] duration metric: took 6m0.000280641s for node "ha-008703-m02" to be "Ready" ...
	I1212 21:05:29.955795  444203 out.go:203] 
	W1212 21:05:29.958921  444203 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:05:29.958945  444203 out.go:285] * 
	W1212 21:05:29.961096  444203 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:05:29.963919  444203 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.573353501Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.2" id=e336470e-972a-4f5a-994c-a420cec7e1fd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.57546657Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-008703/kube-apiserver" id=d075afca-5b01-4a72-af26-095e4c3fda98 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.575597509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.580854253Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.581359382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.601253977Z" level=info msg="Created container cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f: kube-system/kube-apiserver-ha-008703/kube-apiserver" id=d075afca-5b01-4a72-af26-095e4c3fda98 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.602097012Z" level=info msg="Starting container: cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f" id=dbce5b8b-662d-490b-9c7e-6322afe66b97 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.604098646Z" level=info msg="Started container" PID=1222 containerID=cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f description=kube-system/kube-apiserver-ha-008703/kube-apiserver id=dbce5b8b-662d-490b-9c7e-6322afe66b97 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd00fe9660f8414338311e9c84221931557aa6e52742b6d1c070584ba8d05455
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.567811041Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=06123f68-d5a0-4e2d-b7b6-01920744fc92 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.569304002Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=7187c520-d59c-408f-86c9-0f55666a4f1d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.570672884Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-008703/kube-controller-manager" id=d5360af4-8fe4-4c01-bc64-14e0357b8194 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.570785361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.57666159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.577156478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.596206938Z" level=info msg="Created container f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd: kube-system/kube-controller-manager-ha-008703/kube-controller-manager" id=d5360af4-8fe4-4c01-bc64-14e0357b8194 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.597045264Z" level=info msg="Starting container: f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd" id=ef0dba9e-4ceb-40c6-83a8-0ba642cb308d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.598929227Z" level=info msg="Started container" PID=1236 containerID=f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd description=kube-system/kube-controller-manager-ha-008703/kube-controller-manager id=ef0dba9e-4ceb-40c6-83a8-0ba642cb308d name=/runtime.v1.RuntimeService/StartContainer sandboxID=85be12a014baa67b64e07a5bfb74b282216901ce9944cc92b4cfb2a168b1bf90
	Dec 12 21:03:11 ha-008703 conmon[1219]: conmon cf99f099390ca3b31b52 <ninfo>: container 1222 exited with status 255
	Dec 12 21:03:12 ha-008703 crio[664]: time="2025-12-12T21:03:12.427216445Z" level=info msg="Removing container: cf48088caa4cfb42f93d49a1c1e5a462244bc1e12ac0abbb057d0607ebc9e44a" id=b12c6d3e-0600-43bb-900e-f0c271e39ed8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:12 ha-008703 crio[664]: time="2025-12-12T21:03:12.437307235Z" level=info msg="Error loading conmon cgroup of container cf48088caa4cfb42f93d49a1c1e5a462244bc1e12ac0abbb057d0607ebc9e44a: cgroup deleted" id=b12c6d3e-0600-43bb-900e-f0c271e39ed8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:12 ha-008703 crio[664]: time="2025-12-12T21:03:12.440793584Z" level=info msg="Removed container cf48088caa4cfb42f93d49a1c1e5a462244bc1e12ac0abbb057d0607ebc9e44a: kube-system/kube-apiserver-ha-008703/kube-apiserver" id=b12c6d3e-0600-43bb-900e-f0c271e39ed8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:21 ha-008703 conmon[1233]: conmon f56a6db74f42e64847c6 <ninfo>: container 1236 exited with status 1
	Dec 12 21:03:22 ha-008703 crio[664]: time="2025-12-12T21:03:22.453363026Z" level=info msg="Removing container: a1895ad524a296033df01c087a54664f80531ed33e6a1a8194edb5080ed07279" id=c40a9037-66b6-4437-ba74-8a8cb6373f0f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:22 ha-008703 crio[664]: time="2025-12-12T21:03:22.462475363Z" level=info msg="Error loading conmon cgroup of container a1895ad524a296033df01c087a54664f80531ed33e6a1a8194edb5080ed07279: cgroup deleted" id=c40a9037-66b6-4437-ba74-8a8cb6373f0f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:22 ha-008703 crio[664]: time="2025-12-12T21:03:22.46560056Z" level=info msg="Removed container a1895ad524a296033df01c087a54664f80531ed33e6a1a8194edb5080ed07279: kube-system/kube-controller-manager-ha-008703/kube-controller-manager" id=c40a9037-66b6-4437-ba74-8a8cb6373f0f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	f56a6db74f42e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   2 minutes ago       Exited              kube-controller-manager   6                   85be12a014baa       kube-controller-manager-ha-008703   kube-system
	cf99f099390ca       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   2 minutes ago       Exited              kube-apiserver            6                   dd00fe9660f84       kube-apiserver-ha-008703            kube-system
	dec4a7f43553c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   3 minutes ago       Running             etcd                      2                   aacc080aed809       etcd-ha-008703                      kube-system
	8df671b2f67c1       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  0                   b6145737bcabc       kube-vip-ha-008703                  kube-system
	afc1929ca6e74       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   7 minutes ago       Running             kube-scheduler            1                   1b70e5a4174e6       kube-scheduler-ha-008703            kube-system
	d1a55d9c86371       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   7 minutes ago       Exited              etcd                      1                   aacc080aed809       etcd-ha-008703                      kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	[Dec12 20:52] overlayfs: idmapped layers are currently not supported
	[ +33.094252] overlayfs: idmapped layers are currently not supported
	[Dec12 20:53] overlayfs: idmapped layers are currently not supported
	[Dec12 20:55] overlayfs: idmapped layers are currently not supported
	[Dec12 20:56] overlayfs: idmapped layers are currently not supported
	[Dec12 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.790478] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d1a55d9c86371ac0863607a8786cbe02fed629a5326460325861f8f7188e31b3] <==
	{"level":"warn","ts":"2025-12-12T21:01:55.687759Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"warn","ts":"2025-12-12T21:01:55.703030Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:01:55.703048Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:01:55.703082Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"info","ts":"2025-12-12T21:01:55.709338Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709404Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709432Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709445Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709498Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709515Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:01:55.996239Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:01:56.496433Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:01:56.997590Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:01:57.498752Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-12T21:01:57.609514Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609568Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609589Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609600Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609635Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609646Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:01:57.683100Z","caller":"etcdserver/server.go:1830","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-008703 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"warn","ts":"2025-12-12T21:01:57.990505Z","caller":"etcdserver/v3_server.go:923","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-12-12T21:01:57.990631Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.000611314s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-12-12T21:01:57.990677Z","caller":"traceutil/trace.go:172","msg":"trace[1992001531] range","detail":"{range_begin:; range_end:; }","duration":"7.000675488s","start":"2025-12-12T21:01:50.989989Z","end":"2025-12-12T21:01:57.990665Z","steps":["trace[1992001531] 'agreement among raft nodes before linearized reading'  (duration: 7.000604562s)"],"step_count":1}
	{"level":"error","ts":"2025-12-12T21:01:57.990777Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: request timed out\n[+]non_learner ok\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2294\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2822\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3301\nnet/http.(*conn).serve\n\tnet/http/server.go:2102"}
	
	
	==> etcd [dec4a7f43553c1db233f4e5d7706cfb990da47b7ae97783a399590896902caa9] <==
	{"level":"info","ts":"2025-12-12T21:05:30.675508Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:30.675520Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:30.675547Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:30.675557Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:05:31.490443Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:05:31.990756Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-12T21:05:32.074740Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:32.074788Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:32.074809Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:32.074820Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:32.074856Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:32.074866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:05:32.492565Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:05:32.992753Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:05:33.222191Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-12-12T21:05:33.222244Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:05:33.222210Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:05:33.222273Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"info","ts":"2025-12-12T21:05:33.474917Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:33.474966Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:33.474986Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:33.474998Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:33.475033Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:33.475045Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:05:33.493067Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 21:05:33 up  3:48,  0 user,  load average: 0.02, 0.49, 0.75
	Linux ha-008703 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f] <==
	I1212 21:02:49.657910       1 server.go:150] Version: v1.34.2
	I1212 21:02:49.657951       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1212 21:02:51.709271       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1212 21:02:51.709302       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1212 21:02:51.709311       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1212 21:02:51.709316       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1212 21:02:51.709320       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1212 21:02:51.709325       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1212 21:02:51.709330       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1212 21:02:51.709335       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1212 21:02:51.709339       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1212 21:02:51.709343       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1212 21:02:51.709348       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1212 21:02:51.709352       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1212 21:02:51.728675       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1212 21:02:51.730483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1212 21:02:51.730660       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1212 21:02:51.741349       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 21:02:51.747924       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1212 21:02:51.748039       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1212 21:02:51.748302       1 instance.go:239] Using reconciler: lease
	W1212 21:02:51.749591       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:03:11.724871       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1212 21:03:11.728050       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1212 21:03:11.750018       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd] <==
	I1212 21:03:00.877789       1 serving.go:386] Generated self-signed cert in-memory
	I1212 21:03:01.516406       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1212 21:03:01.516437       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:03:01.517922       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 21:03:01.518179       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 21:03:01.518334       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1212 21:03:01.518416       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 21:03:21.521653       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [afc1929ca6e740de8c3a64acc626b0e59ca06f13bd451285650a7214808d9608] <==
	E1212 21:04:32.550798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:04:37.379816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:04:48.233878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 21:04:49.508637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:04:49.753068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 21:04:51.098987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:04:51.493097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 21:04:52.085286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 21:04:54.810851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 21:04:56.542675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 21:04:59.479194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 21:05:01.280059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 21:05:03.231202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 21:05:05.294802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 21:05:06.646919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 21:05:07.513445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 21:05:11.215768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:05:14.637981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 21:05:14.678743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 21:05:18.402248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 21:05:21.622956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:05:22.967707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:05:27.893192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:05:30.607858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 21:05:30.694995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	
	
	==> kubelet <==
	Dec 12 21:05:31 ha-008703 kubelet[802]: E1212 21:05:31.798241     802 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://192.168.49.2:8443/api/v1/nodes\": dial tcp 192.168.49.2:8443: connect: connection refused" node="ha-008703"
	Dec 12 21:05:31 ha-008703 kubelet[802]: E1212 21:05:31.816225     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:31 ha-008703 kubelet[802]: E1212 21:05:31.917399     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.018240     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.118919     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.219571     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.320470     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.421849     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.522620     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.623791     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.725112     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.788798     802 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.49.2:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-008703?timeout=10s\": dial tcp 192.168.49.2:8443: connect: connection refused" interval="7s"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.826170     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:32 ha-008703 kubelet[802]: E1212 21:05:32.927238     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.028795     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.129487     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.230429     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.331268     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.382960     802 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://192.168.49.2:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.432819     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.533695     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.635018     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.735992     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.836823     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:33 ha-008703 kubelet[802]: E1212 21:05:33.937838     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703: exit status 2 (323.743214ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "ha-008703" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (2.45s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:415: expected profile "ha-008703" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008703\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-008703\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSS
haresRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-008703\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{
\"Name\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvid
ia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizat
ions\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-008703
helpers_test.go:244: (dbg) docker inspect ha-008703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	        "Created": "2025-12-12T20:51:45.347520369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 444329,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:57:42.720187316Z",
	            "FinishedAt": "2025-12-12T20:57:42.030104403Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hosts",
	        "LogPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a-json.log",
	        "Name": "/ha-008703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-008703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-008703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	                "LowerDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-008703",
	                "Source": "/var/lib/docker/volumes/ha-008703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-008703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-008703",
	                "name.minikube.sigs.k8s.io": "ha-008703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "493c43ee23fd8e7f78466c871a302edc137070db11f7e6b5d032ce802f3f0262",
	            "SandboxKey": "/var/run/docker/netns/493c43ee23fd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33192"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33194"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-008703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:72:e3:2e:78:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff7ed303f4da65b7f5bbe1449be583e134fa05bb2920a77ae31b6f437cc1bd4b",
	                    "EndpointID": "43672cbb724d118edeacd3584cc29f7251f2a336562cd7d37b8d180ba19da903",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-008703",
	                        "2ec03df03a30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
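The port mappings in the inspect output above (22/tcp published on 127.0.0.1:33192, and so on) are what the restart log further below reads back when it opens its SSH session, via a "docker container inspect -f" Go template. As a minimal, illustrative sketch (same caveats as the sketch above), the mapped SSH port can be extracted like this:

	// ssh_port_sketch.go (illustrative): read the host port mapped to 22/tcp,
	// using the same inspect template that appears in the minikube log below.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-008703").Output()
		if err != nil {
			log.Fatalf("docker inspect failed: %v", err)
		}
		// Against the inspect output above this prints 33192.
		fmt.Println(strings.TrimSpace(string(out)))
	}
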
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703: exit status 2 (337.434678ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 logs -n 25
helpers_test.go:261: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m02 sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m02.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m03:/home/docker/cp-test.txt ha-008703-m04:/home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp testdata/cp-test.txt ha-008703-m04:/home/docker/cp-test.txt                                                            │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703-m04.txt │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703:/home/docker/cp-test_ha-008703-m04_ha-008703.txt                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703.txt                                                │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m02 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m03:/home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node start m02 --alsologtostderr -v 5                                                                                     │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:57 UTC │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │ 12 Dec 25 20:57 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5                                                                                  │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	│ node    │ ha-008703 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:57:42
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:57:42.443959  444203 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:57:42.444139  444203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:57:42.444170  444203 out.go:374] Setting ErrFile to fd 2...
	I1212 20:57:42.444190  444203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:57:42.444488  444203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:57:42.444894  444203 out.go:368] Setting JSON to false
	I1212 20:57:42.445764  444203 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":13215,"bootTime":1765559848,"procs":160,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:57:42.445866  444203 start.go:143] virtualization:  
	I1212 20:57:42.448973  444203 out.go:179] * [ha-008703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:57:42.452845  444203 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:57:42.452922  444203 notify.go:221] Checking for updates...
	I1212 20:57:42.458690  444203 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:57:42.461546  444203 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:42.464549  444203 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:57:42.467438  444203 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:57:42.470311  444203 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:57:42.473663  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:42.473791  444203 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:57:42.502175  444203 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:57:42.502305  444203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:57:42.567154  444203 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 20:57:42.556873235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:57:42.567281  444203 docker.go:319] overlay module found
	I1212 20:57:42.570683  444203 out.go:179] * Using the docker driver based on existing profile
	I1212 20:57:42.573609  444203 start.go:309] selected driver: docker
	I1212 20:57:42.573638  444203 start.go:927] validating driver "docker" against &{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:f
alse ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:42.573801  444203 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:57:42.573920  444203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:57:42.631794  444203 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 20:57:42.621825898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:57:42.632218  444203 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 20:57:42.632254  444203 cni.go:84] Creating CNI manager for ""
	I1212 20:57:42.632316  444203 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 20:57:42.632425  444203 start.go:353] cluster config:
	{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:42.635654  444203 out.go:179] * Starting "ha-008703" primary control-plane node in "ha-008703" cluster
	I1212 20:57:42.638374  444203 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:57:42.641273  444203 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:57:42.644097  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:42.644143  444203 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 20:57:42.644156  444203 cache.go:65] Caching tarball of preloaded images
	I1212 20:57:42.644194  444203 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:57:42.644262  444203 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:57:42.644272  444203 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:57:42.644440  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:42.664350  444203 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:57:42.664409  444203 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:57:42.664432  444203 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:57:42.664465  444203 start.go:360] acquireMachinesLock for ha-008703: {Name:mk6e7d74f274e3ed345384f8b747c056bd141bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:57:42.664534  444203 start.go:364] duration metric: took 45.473µs to acquireMachinesLock for "ha-008703"
	I1212 20:57:42.664558  444203 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:57:42.664567  444203 fix.go:54] fixHost starting: 
	I1212 20:57:42.664830  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:57:42.682444  444203 fix.go:112] recreateIfNeeded on ha-008703: state=Stopped err=<nil>
	W1212 20:57:42.682482  444203 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:57:42.687702  444203 out.go:252] * Restarting existing docker container for "ha-008703" ...
	I1212 20:57:42.687806  444203 cli_runner.go:164] Run: docker start ha-008703
	I1212 20:57:42.929392  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:57:42.950691  444203 kic.go:430] container "ha-008703" state is running.
	I1212 20:57:42.951124  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:42.975911  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:42.976159  444203 machine.go:94] provisionDockerMachine start ...
	I1212 20:57:42.976233  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:43.000950  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:43.001319  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:43.001348  444203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:57:43.002175  444203 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 20:57:46.155930  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 20:57:46.155957  444203 ubuntu.go:182] provisioning hostname "ha-008703"
	I1212 20:57:46.156028  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.174281  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.174613  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.174631  444203 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703 && echo "ha-008703" | sudo tee /etc/hostname
	I1212 20:57:46.334176  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 20:57:46.334256  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.353092  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.353419  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.353444  444203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:57:46.504764  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:57:46.504855  444203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:57:46.504906  444203 ubuntu.go:190] setting up certificates
	I1212 20:57:46.504931  444203 provision.go:84] configureAuth start
	I1212 20:57:46.505018  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:46.522153  444203 provision.go:143] copyHostCerts
	I1212 20:57:46.522196  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:46.522237  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:57:46.522245  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:46.522321  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:57:46.522414  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:46.522431  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:57:46.522435  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:46.522464  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:57:46.522512  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:46.522532  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:57:46.522536  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:46.522563  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:57:46.522618  444203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703 san=[127.0.0.1 192.168.49.2 ha-008703 localhost minikube]
	I1212 20:57:46.651816  444203 provision.go:177] copyRemoteCerts
	I1212 20:57:46.651886  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:57:46.651968  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.671188  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:46.776309  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:57:46.776386  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:57:46.794675  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:57:46.794741  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1212 20:57:46.813024  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:57:46.813085  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:57:46.830950  444203 provision.go:87] duration metric: took 325.983006ms to configureAuth
	I1212 20:57:46.830977  444203 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:57:46.831235  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:46.831340  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:46.848478  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:46.848794  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33192 <nil> <nil>}
	I1212 20:57:46.848812  444203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:57:47.235920  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:57:47.235994  444203 machine.go:97] duration metric: took 4.259816851s to provisionDockerMachine
	I1212 20:57:47.236020  444203 start.go:293] postStartSetup for "ha-008703" (driver="docker")
	I1212 20:57:47.236048  444203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:57:47.236157  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:57:47.236233  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.261608  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.368446  444203 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:57:47.372121  444203 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:57:47.372152  444203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:57:47.372170  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:57:47.372227  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:57:47.372309  444203 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:57:47.372320  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:57:47.372447  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:57:47.380725  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:47.398959  444203 start.go:296] duration metric: took 162.907605ms for postStartSetup
	I1212 20:57:47.399064  444203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:57:47.399134  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.420756  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.525530  444203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:57:47.530321  444203 fix.go:56] duration metric: took 4.865746757s for fixHost
	I1212 20:57:47.530348  444203 start.go:83] releasing machines lock for "ha-008703", held for 4.865800567s
	I1212 20:57:47.530419  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:57:47.548629  444203 ssh_runner.go:195] Run: cat /version.json
	I1212 20:57:47.548688  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.548950  444203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:57:47.549003  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:57:47.573240  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.580519  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:57:47.676043  444203 ssh_runner.go:195] Run: systemctl --version
	I1212 20:57:47.771712  444203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:57:47.808898  444203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:57:47.813508  444203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:57:47.813590  444203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:57:47.821723  444203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:57:47.821748  444203 start.go:496] detecting cgroup driver to use...
	I1212 20:57:47.821827  444203 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:57:47.821894  444203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:57:47.837549  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:57:47.851337  444203 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:57:47.851435  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:57:47.867827  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:57:47.881469  444203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:57:47.990806  444203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:57:48.117810  444203 docker.go:234] disabling docker service ...
	I1212 20:57:48.117891  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:57:48.133641  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:57:48.146962  444203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:57:48.263631  444203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:57:48.385870  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:57:48.400502  444203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:57:48.415928  444203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:57:48.415999  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.425436  444203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:57:48.425516  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.434622  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.443654  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.452998  444203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:57:48.462000  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.471517  444203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.480019  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:48.488892  444203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:57:48.501776  444203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:57:48.509429  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:48.636874  444203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:57:48.831677  444203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:57:48.831797  444203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:57:48.835749  444203 start.go:564] Will wait 60s for crictl version
	I1212 20:57:48.835860  444203 ssh_runner.go:195] Run: which crictl
	I1212 20:57:48.839496  444203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:57:48.865845  444203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:57:48.865936  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:57:48.896176  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:57:48.926063  444203 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
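The sed edits at 20:57:48 rewrite /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A hedged sketch for verifying the resulting drop-in on the node:

    minikube ssh -p ha-008703 -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # roughly expected, per the edits above:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #     "net.ipv4.ip_unprivileged_port_start=0",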
	I1212 20:57:48.928824  444203 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:57:48.945819  444203 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:57:48.949721  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
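The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and re-adds it pointing at the Docker network gateway (192.168.49.1). A quick check of the entry (sketch):

    minikube ssh -p ha-008703 -- grep host.minikube.internal /etc/hosts
    # expected: 192.168.49.1	host.minikube.internal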
	I1212 20:57:48.960274  444203 kubeadm.go:884] updating cluster {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 20:57:48.960470  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:48.960528  444203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:57:48.995177  444203 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:57:48.995203  444203 crio.go:433] Images already preloaded, skipping extraction
	I1212 20:57:48.995261  444203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 20:57:49.022349  444203 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 20:57:49.022375  444203 cache_images.go:86] Images are preloaded, skipping loading
	I1212 20:57:49.022384  444203 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 20:57:49.022522  444203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
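The rendered [Unit]/[Service] snippet above is the kubelet systemd drop-in; it is copied to the node a few steps below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 359-byte scp). A sketch for inspecting it in place:

    minikube ssh -p ha-008703 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    minikube ssh -p ha-008703 -- systemctl cat kubelet   # shows the unit plus all drop-ins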
	I1212 20:57:49.022613  444203 ssh_runner.go:195] Run: crio config
	I1212 20:57:49.094808  444203 cni.go:84] Creating CNI manager for ""
	I1212 20:57:49.094833  444203 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 20:57:49.094884  444203 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 20:57:49.094931  444203 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-008703 NodeName:ha-008703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 20:57:49.095072  444203 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-008703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 20:57:49.095097  444203 kube-vip.go:115] generating kube-vip config ...
	I1212 20:57:49.095151  444203 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 20:57:49.107313  444203 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:57:49.107428  444203 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
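Because the ip_vs modules are unavailable, the generated kube-vip static pod keeps only the ARP-based VIP (vip_arp: true) and skips IPVS control-plane load balancing, as the "giving up enabling control-plane load-balancing" message above notes. A hedged sketch for checking the HA VIP once the leader's kube-vip pod is running, plus an optional look at whether the ipvs modules could be loaded at all:

    minikube ssh -p ha-008703 -- "ip addr show eth0 | grep 192.168.49.254"
    minikube ssh -p ha-008703 -- "sudo modprobe ip_vs && lsmod | grep ip_vs"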
	I1212 20:57:49.107499  444203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:57:49.115345  444203 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:57:49.115415  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1212 20:57:49.123505  444203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1212 20:57:49.136430  444203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:57:49.149479  444203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1212 20:57:49.163560  444203 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
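At this point the generated manifests are staged on the node: the kubeadm config (2206 bytes) as /var/tmp/minikube/kubeadm.yaml.new and the kube-vip static pod (1358 bytes) as /etc/kubernetes/manifests/kube-vip.yaml. The restart path later decides whether reconfiguration is needed by diffing the staged file against the active one (see the "does not require reconfiguration" message further down); a sketch of that check by hand:

    minikube ssh -p ha-008703 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
    # no output (exit 0) means the running control plane already matches the desired config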
	I1212 20:57:49.176571  444203 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 20:57:49.180272  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:57:49.190686  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:49.306812  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:57:49.322473  444203 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.2
	I1212 20:57:49.322495  444203 certs.go:195] generating shared ca certs ...
	I1212 20:57:49.322510  444203 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.322646  444203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:57:49.322706  444203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:57:49.322721  444203 certs.go:257] generating profile certs ...
	I1212 20:57:49.322803  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 20:57:49.322831  444203 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904
	I1212 20:57:49.322854  444203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I1212 20:57:49.472738  444203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 ...
	I1212 20:57:49.472774  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904: {Name:mk2a5379bc5668a2307c7e3ec981ab026dda45c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.472981  444203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904 ...
	I1212 20:57:49.473001  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904: {Name:mk9431140de21966b13bcbc9ba3792a6b7192788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:49.473093  444203 certs.go:382] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt.88c21904 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt
	I1212 20:57:49.473241  444203 certs.go:386] copying /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904 -> /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key
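The regenerated apiserver certificate covers the cluster service IP (10.96.0.1), localhost, each control-plane node IP (192.168.49.2-4) and the HA VIP 192.168.49.254, so the API server can be reached over TLS through any of them. A hedged sketch for inspecting the SANs in the written cert:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt \
      | grep -A1 'Subject Alternative Name'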
	I1212 20:57:49.473382  444203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 20:57:49.473401  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:57:49.473419  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:57:49.473436  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:57:49.473449  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:57:49.473464  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:57:49.473478  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:57:49.473493  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:57:49.473504  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:57:49.473559  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:57:49.473598  444203 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:57:49.473610  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:57:49.473644  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:57:49.473680  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:57:49.473711  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:57:49.473759  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:49.473803  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.473819  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.473830  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.474446  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:57:49.501229  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:57:49.522107  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:57:49.550223  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:57:49.582434  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 20:57:49.603191  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:57:49.623340  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:57:49.644021  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:57:49.665040  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:57:49.686900  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:57:49.705561  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:57:49.723829  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 20:57:49.737136  444203 ssh_runner.go:195] Run: openssl version
	I1212 20:57:49.743571  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.751112  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:57:49.759507  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.763360  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.763427  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:57:49.804630  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:57:49.811926  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.819270  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:57:49.826837  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.830838  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.830912  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:57:49.872515  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 20:57:49.880250  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.887711  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:57:49.895442  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.899072  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.899140  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:57:49.940560  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
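Each CA copied to /usr/share/ca-certificates is exposed to OpenSSL via a symlink in /etc/ssl/certs named after its subject hash, which is what the checks for 3ec20f2e.0, b5213941.0 and 51391683.0 above verify. A minimal sketch of how those names are derived, assuming the same paths on the node:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$h"                       # b5213941 in this run
    ls -l /etc/ssl/certs/"$h".0     # should link back to the .pem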
	I1212 20:57:49.948269  444203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:57:49.952111  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:57:49.994329  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:57:50.049087  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:57:50.098831  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:57:50.155411  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:57:50.252310  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
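The openssl -checkend 86400 runs above are 24-hour expiry checks: the command exits 0 only if the certificate is still valid 86400 seconds from now, which lets minikube reuse the existing control-plane certs instead of regenerating them. Sketch:

    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"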
	I1212 20:57:50.338780  444203 kubeadm.go:401] StartCluster: {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:57:50.338978  444203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 20:57:50.339069  444203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 20:57:50.398773  444203 cri.go:89] found id: "8df671b2f67c1fea6933eed59bb0ed038b61ceb87afc6b29bfda67eb56bf94c5"
	I1212 20:57:50.398830  444203 cri.go:89] found id: "153af1b54c51e5f4602a99ee68deeb035520f031d6275c686a4d837adf8c7a9b"
	I1212 20:57:50.398851  444203 cri.go:89] found id: "afc1929ca6e740de8c3a64acc626b0e59ca06f13bd451285650a7214808d9608"
	I1212 20:57:50.398870  444203 cri.go:89] found id: "3df9e833b1b81ce05c8ed6dff7db997b5fe66bf67be14061cdbe13efd2dd87cf"
	I1212 20:57:50.398889  444203 cri.go:89] found id: "d1a55d9c86371ac0863607a8786cbe02fed629a5326460325861f8f7188e31b3"
	I1212 20:57:50.398924  444203 cri.go:89] found id: ""
	I1212 20:57:50.399008  444203 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 20:57:50.427490  444203 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:57:50Z" level=error msg="open /run/runc: no such file or directory"
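The "unpause failed" warning above is non-fatal: minikube tries to enumerate paused containers with runc directly, /run/runc does not exist on this node, and it simply proceeds to the cluster-restart path (the next step checks for existing kubeadm/kubelet config files). The same containers were already visible through the CRI a moment earlier; a sketch of that query:

    minikube ssh -p ha-008703 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system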
	I1212 20:57:50.427620  444203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 20:57:50.436409  444203 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 20:57:50.436480  444203 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 20:57:50.436565  444203 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 20:57:50.450254  444203 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:57:50.450723  444203 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-008703" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:50.450865  444203 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "ha-008703" cluster setting kubeconfig missing "ha-008703" context setting]
	I1212 20:57:50.451184  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.451770  444203 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 20:57:50.452602  444203 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 20:57:50.452649  444203 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 20:57:50.452671  444203 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 20:57:50.452699  444203 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 20:57:50.452724  444203 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 20:57:50.452669  444203 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 20:57:50.453064  444203 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 20:57:50.471433  444203 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 20:57:50.471502  444203 kubeadm.go:602] duration metric: took 34.98508ms to restartPrimaryControlPlane
	I1212 20:57:50.471528  444203 kubeadm.go:403] duration metric: took 132.757161ms to StartCluster
	I1212 20:57:50.471560  444203 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.471649  444203 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:57:50.472264  444203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:57:50.472602  444203 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:57:50.472664  444203 start.go:242] waiting for startup goroutines ...
	I1212 20:57:50.472701  444203 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 20:57:50.473166  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:50.478696  444203 out.go:179] * Enabled addons: 
	I1212 20:57:50.481795  444203 addons.go:530] duration metric: took 9.096965ms for enable addons: enabled=[]
	I1212 20:57:50.481888  444203 start.go:247] waiting for cluster config update ...
	I1212 20:57:50.481913  444203 start.go:256] writing updated cluster config ...
	I1212 20:57:50.485267  444203 out.go:203] 
	I1212 20:57:50.488653  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:50.488812  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:50.492075  444203 out.go:179] * Starting "ha-008703-m02" control-plane node in "ha-008703" cluster
	I1212 20:57:50.494987  444203 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:57:50.498206  444203 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:57:50.501052  444203 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:57:50.501113  444203 cache.go:65] Caching tarball of preloaded images
	I1212 20:57:50.501125  444203 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:57:50.501268  444203 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 20:57:50.501295  444203 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 20:57:50.501440  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:50.539828  444203 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 20:57:50.539849  444203 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 20:57:50.539866  444203 cache.go:243] Successfully downloaded all kic artifacts
	I1212 20:57:50.539902  444203 start.go:360] acquireMachinesLock for ha-008703-m02: {Name:mk9bbd559a38ee71084b431688c18ccf671707a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 20:57:50.539970  444203 start.go:364] duration metric: took 48.32µs to acquireMachinesLock for "ha-008703-m02"
	I1212 20:57:50.539997  444203 start.go:96] Skipping create...Using existing machine configuration
	I1212 20:57:50.540008  444203 fix.go:54] fixHost starting: m02
	I1212 20:57:50.540289  444203 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 20:57:50.570630  444203 fix.go:112] recreateIfNeeded on ha-008703-m02: state=Stopped err=<nil>
	W1212 20:57:50.570662  444203 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 20:57:50.573920  444203 out.go:252] * Restarting existing docker container for "ha-008703-m02" ...
	I1212 20:57:50.574010  444203 cli_runner.go:164] Run: docker start ha-008703-m02
	I1212 20:57:51.021435  444203 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 20:57:51.051445  444203 kic.go:430] container "ha-008703-m02" state is running.
	I1212 20:57:51.051835  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:51.081868  444203 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 20:57:51.082129  444203 machine.go:94] provisionDockerMachine start ...
	I1212 20:57:51.082189  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:51.114065  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:51.114398  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:51.114407  444203 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 20:57:51.115163  444203 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 20:57:54.335915  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 20:57:54.335981  444203 ubuntu.go:182] provisioning hostname "ha-008703-m02"
	I1212 20:57:54.336094  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:54.365312  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:54.365660  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:54.365676  444203 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m02 && echo "ha-008703-m02" | sudo tee /etc/hostname
	I1212 20:57:54.750173  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 20:57:54.750344  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:54.784610  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:54.784933  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:54.784950  444203 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 20:57:55.052390  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 20:57:55.052421  444203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 20:57:55.052440  444203 ubuntu.go:190] setting up certificates
	I1212 20:57:55.052459  444203 provision.go:84] configureAuth start
	I1212 20:57:55.052553  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:55.106212  444203 provision.go:143] copyHostCerts
	I1212 20:57:55.106261  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:55.106295  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 20:57:55.106307  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 20:57:55.106385  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 20:57:55.106475  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:55.106498  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 20:57:55.106503  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 20:57:55.106533  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 20:57:55.106577  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:55.106598  444203 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 20:57:55.106605  444203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 20:57:55.106631  444203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 20:57:55.106681  444203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m02 san=[127.0.0.1 192.168.49.3 ha-008703-m02 localhost minikube]
	I1212 20:57:55.315977  444203 provision.go:177] copyRemoteCerts
	I1212 20:57:55.316047  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 20:57:55.316093  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:55.334254  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:55.478383  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 20:57:55.478451  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 20:57:55.517393  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 20:57:55.517463  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 20:57:55.542182  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 20:57:55.542251  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 20:57:55.571975  444203 provision.go:87] duration metric: took 519.496148ms to configureAuth
	I1212 20:57:55.572013  444203 ubuntu.go:206] setting minikube options for container-runtime
	I1212 20:57:55.572281  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:57:55.572439  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:55.601112  444203 main.go:143] libmachine: Using SSH client type: native
	I1212 20:57:55.601422  444203 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33197 <nil> <nil>}
	I1212 20:57:55.601436  444203 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 20:57:56.060871  444203 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 20:57:56.060947  444203 machine.go:97] duration metric: took 4.978806446s to provisionDockerMachine
	I1212 20:57:56.060977  444203 start.go:293] postStartSetup for "ha-008703-m02" (driver="docker")
	I1212 20:57:56.061019  444203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 20:57:56.061131  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 20:57:56.061204  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.079622  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.188393  444203 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 20:57:56.191735  444203 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 20:57:56.191761  444203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 20:57:56.191773  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 20:57:56.191830  444203 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 20:57:56.191915  444203 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 20:57:56.191925  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 20:57:56.192023  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 20:57:56.199559  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:57:56.216617  444203 start.go:296] duration metric: took 155.610404ms for postStartSetup
	I1212 20:57:56.216698  444203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:57:56.216740  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.233309  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.337931  444203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 20:57:56.342965  444203 fix.go:56] duration metric: took 5.802950492s for fixHost
	I1212 20:57:56.342991  444203 start.go:83] releasing machines lock for "ha-008703-m02", held for 5.803007207s
	I1212 20:57:56.343061  444203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 20:57:56.364818  444203 out.go:179] * Found network options:
	I1212 20:57:56.367652  444203 out.go:179]   - NO_PROXY=192.168.49.2
	W1212 20:57:56.370401  444203 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 20:57:56.370443  444203 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 20:57:56.370511  444203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 20:57:56.370552  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.370593  444203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 20:57:56.370646  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 20:57:56.391626  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.398057  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33197 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 20:57:56.575929  444203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 20:57:56.710881  444203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 20:57:56.710966  444203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 20:57:56.722145  444203 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 20:57:56.722217  444203 start.go:496] detecting cgroup driver to use...
	I1212 20:57:56.722266  444203 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 20:57:56.722342  444203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 20:57:56.742981  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 20:57:56.765595  444203 docker.go:218] disabling cri-docker service (if available) ...
	I1212 20:57:56.765706  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 20:57:56.793166  444203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 20:57:56.814044  444203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 20:57:57.024630  444203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 20:57:57.240003  444203 docker.go:234] disabling docker service ...
	I1212 20:57:57.240088  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 20:57:57.260709  444203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 20:57:57.276845  444203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 20:57:57.490011  444203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 20:57:57.701011  444203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 20:57:57.718231  444203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 20:57:57.734672  444203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 20:57:57.734758  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.752791  444203 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 20:57:57.752868  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.767185  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.783487  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.798836  444203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 20:57:57.808080  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.821261  444203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 20:57:57.835565  444203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
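
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses registry.k8s.io/pause:3.10.1 as the pause image and cgroupfs as the cgroup manager. A minimal Go sketch of one such in-place edit, run locally with sudo instead of through minikube's ssh_runner; setCrioPauseImage is a hypothetical helper, not minikube code:

package main

import (
	"fmt"
	"os/exec"
)

// setCrioPauseImage rewrites any existing pause_image line in a CRI-O drop-in
// config, mirroring the sed expression logged above.
func setCrioPauseImage(confPath, image string) error {
	expr := fmt.Sprintf(`s|^.*pause_image = .*$|pause_image = "%s"|`, image)
	out, err := exec.Command("sudo", "sed", "-i", expr, confPath).CombinedOutput()
	if err != nil {
		return fmt.Errorf("sed -i on %s failed: %v: %s", confPath, err, out)
	}
	return nil
}

func main() {
	if err := setCrioPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
		fmt.Println(err)
	}
}
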
	I1212 20:57:57.848412  444203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 20:57:57.861550  444203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 20:57:57.870875  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:57:58.097322  444203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 20:59:28.418240  444203 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1m30.320839757s)
	I1212 20:59:28.418266  444203 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 20:59:28.418318  444203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 20:59:28.421907  444203 start.go:564] Will wait 60s for crictl version
	I1212 20:59:28.421970  444203 ssh_runner.go:195] Run: which crictl
	I1212 20:59:28.425474  444203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 20:59:28.451137  444203 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 20:59:28.451224  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:59:28.487374  444203 ssh_runner.go:195] Run: crio --version
	I1212 20:59:28.523846  444203 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 20:59:28.527097  444203 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 20:59:28.530093  444203 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 20:59:28.546578  444203 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 20:59:28.550700  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
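
The /etc/hosts update above follows a filter-then-append idiom: drop any stale host.minikube.internal line, append the fresh 192.168.49.1 mapping, and copy the temp file back over /etc/hosts. The same idiom as a standalone Go sketch; upsertHostsEntry is a hypothetical helper and writes to a temp path rather than copying over /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry removes any line ending in "\t<hostname>" (like the
// grep -v $'\thostname$' in the log) and appends a fresh "ip\thostname" entry,
// writing the result to outPath.
func upsertHostsEntry(hostsPath, outPath, ip, hostname string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return os.WriteFile(outPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "/tmp/hosts.new", "192.168.49.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
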
	I1212 20:59:28.561522  444203 mustload.go:66] Loading cluster: ha-008703
	I1212 20:59:28.561768  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:59:28.562034  444203 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:59:28.579699  444203 host.go:66] Checking if "ha-008703" exists ...
	I1212 20:59:28.579981  444203 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.3
	I1212 20:59:28.579989  444203 certs.go:195] generating shared ca certs ...
	I1212 20:59:28.580003  444203 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:59:28.580127  444203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 20:59:28.580165  444203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 20:59:28.580173  444203 certs.go:257] generating profile certs ...
	I1212 20:59:28.580247  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 20:59:28.580315  444203 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.b6a91b51
	I1212 20:59:28.580355  444203 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 20:59:28.580363  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 20:59:28.580407  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 20:59:28.580418  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 20:59:28.580430  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 20:59:28.580441  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 20:59:28.580452  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 20:59:28.580465  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 20:59:28.580475  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 20:59:28.580526  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 20:59:28.580557  444203 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 20:59:28.580565  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 20:59:28.580591  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 20:59:28.580614  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 20:59:28.580640  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 20:59:28.580684  444203 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 20:59:28.580713  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 20:59:28.580727  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 20:59:28.580738  444203 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:28.580791  444203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:59:28.597816  444203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33192 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:59:28.696708  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 20:59:28.700659  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 20:59:28.709283  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 20:59:28.713481  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 20:59:28.721707  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 20:59:28.725369  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 20:59:28.733654  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 20:59:28.737443  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 20:59:28.745834  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 20:59:28.749617  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 20:59:28.758164  444203 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 20:59:28.761831  444203 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 20:59:28.770067  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 20:59:28.787610  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 20:59:28.806372  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 20:59:28.824957  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 20:59:28.844568  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 20:59:28.863238  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 20:59:28.881382  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 20:59:28.900337  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 20:59:28.919403  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 20:59:28.938551  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 20:59:28.958859  444203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 20:59:28.977347  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 20:59:28.998600  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 20:59:29.014406  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 20:59:29.027571  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 20:59:29.040968  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 20:59:29.054581  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 20:59:29.067754  444203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 20:59:29.080811  444203 ssh_runner.go:195] Run: openssl version
	I1212 20:59:29.087180  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.095114  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 20:59:29.102755  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.106745  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.106853  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 20:59:29.152715  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 20:59:29.160933  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.168533  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 20:59:29.177095  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.181103  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.181174  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 20:59:29.222399  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 20:59:29.233819  444203 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.241844  444203 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 20:59:29.249788  444203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.254119  444203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.254190  444203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 20:59:29.295461  444203 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
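
The openssl x509 -hash / sudo test -L pairs above verify that each CA certificate in /usr/share/ca-certificates is reachable from /etc/ssl/certs under its OpenSSL subject hash (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL-based clients discover trust anchors. A hedged Go sketch of creating such a hash link, shelling out to openssl the same way the log does; linkCertByHash is a hypothetical helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and symlinks
// /etc/ssl/certs/<hash>.0 to it, the link the "sudo test -L" checks above verify.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %v", certPath, err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	if out, err := exec.Command("sudo", "ln", "-fs", certPath, link).CombinedOutput(); err != nil {
		return fmt.Errorf("linking %s: %v: %s", link, err, out)
	}
	return nil
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
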
	I1212 20:59:29.303146  444203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 20:59:29.307067  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 20:59:29.350787  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 20:59:29.392520  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 20:59:29.433715  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 20:59:29.474688  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 20:59:29.516288  444203 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 20:59:29.557959  444203 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1212 20:59:29.558056  444203 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 20:59:29.558087  444203 kube-vip.go:115] generating kube-vip config ...
	I1212 20:59:29.558148  444203 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 20:59:29.572235  444203 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 20:59:29.572334  444203 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
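
The kube-vip static pod manifest above is rendered from the cluster's VIP (192.168.49.254), API server port, and interface, then copied to /etc/kubernetes/manifests/kube-vip.yaml on the node. A small Go sketch of rendering such a manifest with text/template; the trimmed template below is a hypothetical stand-in for minikube's real one and only covers the fields that vary in this log:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the values that vary per cluster in the manifest above.
type vipParams struct {
	Address   string
	Port      string
	Interface string
	Image     string
}

const podTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: address, value: {{.Address}}}
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`

func main() {
	p := vipParams{Address: "192.168.49.254", Port: "8443", Interface: "eth0", Image: "ghcr.io/kube-vip/kube-vip:v1.0.2"}
	// Render to stdout; in the log the rendered bytes are scp'd to the node instead.
	if err := template.Must(template.New("kube-vip").Parse(podTmpl)).Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
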
	I1212 20:59:29.572441  444203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 20:59:29.580681  444203 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 20:59:29.580751  444203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 20:59:29.588356  444203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 20:59:29.602149  444203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 20:59:29.615313  444203 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 20:59:29.629715  444203 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 20:59:29.633469  444203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 20:59:29.643261  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:59:29.776061  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:59:29.790278  444203 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 20:59:29.790703  444203 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:59:29.794669  444203 out.go:179] * Verifying Kubernetes components...
	I1212 20:59:29.797306  444203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 20:59:29.936519  444203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 20:59:29.950752  444203 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 20:59:29.950831  444203 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
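
The client config above targets the HA VIP (https://192.168.49.254:8443) and is then overridden to the primary control plane's address because the VIP is stale. A minimal client-go sketch of building an equivalent rest.Config from the profile's client certificates, assuming the same paths as the log; this is an illustration, not minikube's kapi helper:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	profile := "/home/jenkins/minikube-integration/22112-362983/.minikube"
	cfg := &rest.Config{
		// HA VIP; the log then overrides this to https://192.168.49.2:8443 when the VIP is stale.
		Host: "https://192.168.49.254:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/profiles/ha-008703/client.crt",
			KeyFile:  profile + "/profiles/ha-008703/client.key",
			CAFile:   profile + "/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Println("building clientset:", err)
		return
	}
	_ = cs // the clientset would then be used to poll node status
}
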
	I1212 20:59:29.952083  444203 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m02" to be "Ready" ...
	W1212 20:59:31.953427  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:33.953536  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:36.453558  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:38.952703  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:41.452655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:43.452691  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:45.952750  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:48.452655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 20:59:50.452746  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:00.954217  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:00:10.954802  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:00:12.960855  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:00:12.961321  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:58400->192.168.49.2:8443: read: connection reset by peer
	W1212 21:00:15.453549  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:17.952657  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:20.453573  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:22.952730  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:24.953541  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:27.452882  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:29.952571  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:32.452751  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:34.953509  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:37.452853  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:39.953131  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:41.953378  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:44.452656  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:46.952721  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:48.952858  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:51.452609  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:53.452824  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:55.952717  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:00:57.953626  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:08.953781  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:01:18.955065  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:01:20.812078  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:01:21.453435  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:23.952633  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:25.953670  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:28.453604  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:30.952713  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:32.953585  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:34.953661  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:37.452751  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:39.952713  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:42.452830  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:44.952768  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:47.452920  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:49.952685  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:51.953605  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:54.452622  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:56.453648  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:01:58.952804  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:01.452588  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:03.452926  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:05.952702  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:07.952958  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:09.953705  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:12.452917  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:14.952877  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:17.452818  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:19.952741  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:22.452860  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:24.952709  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:27.452855  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:29.952655  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:31.952748  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:33.952822  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:36.452695  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:38.452788  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:40.452868  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:42.952779  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:44.952905  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:47.453071  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:49.453482  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:02:59.953684  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	W1212 21:03:09.954306  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:03:12.758807  444203 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	W1212 21:03:12.759288  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.49.1:51776->192.168.49.2:8443: read: connection reset by peer
	W1212 21:03:14.952704  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:17.452843  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:19.952792  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:22.452752  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:24.952700  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:27.452954  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:29.952844  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:32.452952  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:34.953666  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:37.452747  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:39.952664  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:41.952726  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:43.952797  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:45.952870  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:48.452683  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:50.453535  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:52.952772  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:55.452788  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:57.452867  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:03:59.952895  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:02.452860  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:04.952915  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:07.452753  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:09.453637  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:11.952833  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:14.452718  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:16.952636  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:18.953630  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:21.452687  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:23.952770  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:25.952829  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:28.452772  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:30.453677  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:32.952813  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:35.452679  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:37.453048  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:39.453453  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:41.952806  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:44.452710  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:46.952744  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:48.952846  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:51.452675  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:53.452999  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:55.952801  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:04:58.452747  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:00.952662  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:02.952760  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:05.452732  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:07.452887  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:09.952790  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:11.953431  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:14.452702  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:16.952708  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:19.452740  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:21.453565  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:23.953569  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:25.953740  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	W1212 21:05:28.452736  444203 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": dial tcp 192.168.49.2:8443: connect: connection refused
	I1212 21:05:29.952402  444203 node_ready.go:38] duration metric: took 6m0.000280641s for node "ha-008703-m02" to be "Ready" ...
	I1212 21:05:29.955795  444203 out.go:203] 
	W1212 21:05:29.958921  444203 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1212 21:05:29.958945  444203 out.go:285] * 
	W1212 21:05:29.961096  444203 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:05:29.963919  444203 out.go:203] 
	
	
	==> CRI-O <==
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.573353501Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.34.2" id=e336470e-972a-4f5a-994c-a420cec7e1fd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.57546657Z" level=info msg="Creating container: kube-system/kube-apiserver-ha-008703/kube-apiserver" id=d075afca-5b01-4a72-af26-095e4c3fda98 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.575597509Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.580854253Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.581359382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.601253977Z" level=info msg="Created container cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f: kube-system/kube-apiserver-ha-008703/kube-apiserver" id=d075afca-5b01-4a72-af26-095e4c3fda98 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.602097012Z" level=info msg="Starting container: cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f" id=dbce5b8b-662d-490b-9c7e-6322afe66b97 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:02:49 ha-008703 crio[664]: time="2025-12-12T21:02:49.604098646Z" level=info msg="Started container" PID=1222 containerID=cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f description=kube-system/kube-apiserver-ha-008703/kube-apiserver id=dbce5b8b-662d-490b-9c7e-6322afe66b97 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dd00fe9660f8414338311e9c84221931557aa6e52742b6d1c070584ba8d05455
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.567811041Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=06123f68-d5a0-4e2d-b7b6-01920744fc92 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.569304002Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.34.2" id=7187c520-d59c-408f-86c9-0f55666a4f1d name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.570672884Z" level=info msg="Creating container: kube-system/kube-controller-manager-ha-008703/kube-controller-manager" id=d5360af4-8fe4-4c01-bc64-14e0357b8194 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.570785361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.57666159Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.577156478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.596206938Z" level=info msg="Created container f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd: kube-system/kube-controller-manager-ha-008703/kube-controller-manager" id=d5360af4-8fe4-4c01-bc64-14e0357b8194 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.597045264Z" level=info msg="Starting container: f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd" id=ef0dba9e-4ceb-40c6-83a8-0ba642cb308d name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:02:59 ha-008703 crio[664]: time="2025-12-12T21:02:59.598929227Z" level=info msg="Started container" PID=1236 containerID=f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd description=kube-system/kube-controller-manager-ha-008703/kube-controller-manager id=ef0dba9e-4ceb-40c6-83a8-0ba642cb308d name=/runtime.v1.RuntimeService/StartContainer sandboxID=85be12a014baa67b64e07a5bfb74b282216901ce9944cc92b4cfb2a168b1bf90
	Dec 12 21:03:11 ha-008703 conmon[1219]: conmon cf99f099390ca3b31b52 <ninfo>: container 1222 exited with status 255
	Dec 12 21:03:12 ha-008703 crio[664]: time="2025-12-12T21:03:12.427216445Z" level=info msg="Removing container: cf48088caa4cfb42f93d49a1c1e5a462244bc1e12ac0abbb057d0607ebc9e44a" id=b12c6d3e-0600-43bb-900e-f0c271e39ed8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:12 ha-008703 crio[664]: time="2025-12-12T21:03:12.437307235Z" level=info msg="Error loading conmon cgroup of container cf48088caa4cfb42f93d49a1c1e5a462244bc1e12ac0abbb057d0607ebc9e44a: cgroup deleted" id=b12c6d3e-0600-43bb-900e-f0c271e39ed8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:12 ha-008703 crio[664]: time="2025-12-12T21:03:12.440793584Z" level=info msg="Removed container cf48088caa4cfb42f93d49a1c1e5a462244bc1e12ac0abbb057d0607ebc9e44a: kube-system/kube-apiserver-ha-008703/kube-apiserver" id=b12c6d3e-0600-43bb-900e-f0c271e39ed8 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:21 ha-008703 conmon[1233]: conmon f56a6db74f42e64847c6 <ninfo>: container 1236 exited with status 1
	Dec 12 21:03:22 ha-008703 crio[664]: time="2025-12-12T21:03:22.453363026Z" level=info msg="Removing container: a1895ad524a296033df01c087a54664f80531ed33e6a1a8194edb5080ed07279" id=c40a9037-66b6-4437-ba74-8a8cb6373f0f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:22 ha-008703 crio[664]: time="2025-12-12T21:03:22.462475363Z" level=info msg="Error loading conmon cgroup of container a1895ad524a296033df01c087a54664f80531ed33e6a1a8194edb5080ed07279: cgroup deleted" id=c40a9037-66b6-4437-ba74-8a8cb6373f0f name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 12 21:03:22 ha-008703 crio[664]: time="2025-12-12T21:03:22.46560056Z" level=info msg="Removed container a1895ad524a296033df01c087a54664f80531ed33e6a1a8194edb5080ed07279: kube-system/kube-controller-manager-ha-008703/kube-controller-manager" id=c40a9037-66b6-4437-ba74-8a8cb6373f0f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	f56a6db74f42e       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   2 minutes ago       Exited              kube-controller-manager   6                   85be12a014baa       kube-controller-manager-ha-008703   kube-system
	cf99f099390ca       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   2 minutes ago       Exited              kube-apiserver            6                   dd00fe9660f84       kube-apiserver-ha-008703            kube-system
	dec4a7f43553c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   3 minutes ago       Running             etcd                      2                   aacc080aed809       etcd-ha-008703                      kube-system
	8df671b2f67c1       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   7 minutes ago       Running             kube-vip                  0                   b6145737bcabc       kube-vip-ha-008703                  kube-system
	afc1929ca6e74       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   7 minutes ago       Running             kube-scheduler            1                   1b70e5a4174e6       kube-scheduler-ha-008703            kube-system
	d1a55d9c86371       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   7 minutes ago       Exited              etcd                      1                   aacc080aed809       etcd-ha-008703                      kube-system
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	[Dec12 20:52] overlayfs: idmapped layers are currently not supported
	[ +33.094252] overlayfs: idmapped layers are currently not supported
	[Dec12 20:53] overlayfs: idmapped layers are currently not supported
	[Dec12 20:55] overlayfs: idmapped layers are currently not supported
	[Dec12 20:56] overlayfs: idmapped layers are currently not supported
	[Dec12 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.790478] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [d1a55d9c86371ac0863607a8786cbe02fed629a5326460325861f8f7188e31b3] <==
	{"level":"warn","ts":"2025-12-12T21:01:55.687759Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"warn","ts":"2025-12-12T21:01:55.703030Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:01:55.703048Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:01:55.703082Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: i/o timeout"}
	{"level":"info","ts":"2025-12-12T21:01:55.709338Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709404Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709432Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709445Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709498Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:55.709515Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:01:55.996239Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:01:56.496433Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:01:56.997590Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:01:57.498752Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939226242106,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-12T21:01:57.609514Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609568Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609589Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609600Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609635Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:01:57.609646Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:01:57.683100Z","caller":"etcdserver/server.go:1830","msg":"failed to publish local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-008703 ClientURLs:[https://192.168.49.2:2379]}","publish-timeout":"7s","error":"context deadline exceeded"}
	{"level":"warn","ts":"2025-12-12T21:01:57.990505Z","caller":"etcdserver/v3_server.go:923","msg":"timed out waiting for read index response (local node might have slow network)","timeout":"7s"}
	{"level":"warn","ts":"2025-12-12T21:01:57.990631Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"7.000611314s","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"","error":"etcdserver: request timed out"}
	{"level":"info","ts":"2025-12-12T21:01:57.990677Z","caller":"traceutil/trace.go:172","msg":"trace[1992001531] range","detail":"{range_begin:; range_end:; }","duration":"7.000675488s","start":"2025-12-12T21:01:50.989989Z","end":"2025-12-12T21:01:57.990665Z","steps":["trace[1992001531] 'agreement among raft nodes before linearized reading'  (duration: 7.000604562s)"],"step_count":1}
	{"level":"error","ts":"2025-12-12T21:01:57.990777Z","caller":"etcdhttp/health.go:345","msg":"Health check error","path":"/readyz","reason":"[-]linearizable_read failed: etcdserver: request timed out\n[+]non_learner ok\n[+]data_corruption ok\n[+]serializable_read ok\n","status-code":503,"stacktrace":"go.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp.(*CheckRegistry).installRootHTTPEndpoint.newHealthHandler.func2\n\tgo.etcd.io/etcd/server/v3/etcdserver/api/etcdhttp/health.go:345\nnet/http.HandlerFunc.ServeHTTP\n\tnet/http/server.go:2294\nnet/http.(*ServeMux).ServeHTTP\n\tnet/http/server.go:2822\nnet/http.serverHandler.ServeHTTP\n\tnet/http/server.go:3301\nnet/http.(*conn).serve\n\tnet/http/server.go:2102"}
	
	
	==> etcd [dec4a7f43553c1db233f4e5d7706cfb990da47b7ae97783a399590896902caa9] <==
	{"level":"info","ts":"2025-12-12T21:05:32.074866Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:05:32.492565Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:05:32.992753Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:05:33.222191Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-12-12T21:05:33.222244Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:05:33.222210Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"dbbac03088fbc00a","rtt":"0s","error":"dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:05:33.222273Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"info","ts":"2025-12-12T21:05:33.474917Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:33.474966Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:33.474986Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:33.474998Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:33.475033Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:33.475045Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:05:33.493067Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:05:33.994206Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:05:34.495257Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"info","ts":"2025-12-12T21:05:34.875173Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:34.875247Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:34.875268Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to 61bc3757651ee949 at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:34.875301Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1064","msg":"aec36adc501070cc [logterm: 3, index: 2081] sent MsgPreVote request to dbbac03088fbc00a at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:34.875335Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2025-12-12T21:05:34.875345Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"aec36adc501070cc has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"warn","ts":"2025-12-12T21:05:34.996086Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:05:35.497040Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2025-12-12T21:05:35.997343Z","caller":"etcdserver/v3_server.go:911","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128041939289644595,"retry-timeout":"500ms"}
	
	
	==> kernel <==
	 21:05:36 up  3:48,  0 user,  load average: 0.18, 0.52, 0.76
	Linux ha-008703 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kube-apiserver [cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f] <==
	I1212 21:02:49.657910       1 server.go:150] Version: v1.34.2
	I1212 21:02:49.657951       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1212 21:02:51.709271       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1212 21:02:51.709302       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1212 21:02:51.709311       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1212 21:02:51.709316       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1212 21:02:51.709320       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1212 21:02:51.709325       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1212 21:02:51.709330       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1212 21:02:51.709335       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1212 21:02:51.709339       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1212 21:02:51.709343       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1212 21:02:51.709348       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1212 21:02:51.709352       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1212 21:02:51.728675       1 logging.go:55] [core] [Channel #4 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1212 21:02:51.730483       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	W1212 21:02:51.730660       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1212 21:02:51.741349       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 21:02:51.747924       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1212 21:02:51.748039       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1212 21:02:51.748302       1 instance.go:239] Using reconciler: lease
	W1212 21:02:51.749591       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:03:11.724871       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1212 21:03:11.728050       1 logging.go:55] [core] [Channel #4 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1212 21:03:11.750018       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [f56a6db74f42e64847c62c4c24251ccc7b701ff189b505102a9a1aa2e1db06fd] <==
	I1212 21:03:00.877789       1 serving.go:386] Generated self-signed cert in-memory
	I1212 21:03:01.516406       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1212 21:03:01.516437       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:03:01.517922       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 21:03:01.518179       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1212 21:03:01.518334       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1212 21:03:01.518416       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 21:03:21.521653       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-scheduler [afc1929ca6e740de8c3a64acc626b0e59ca06f13bd451285650a7214808d9608] <==
	E1212 21:04:32.550798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:04:37.379816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:04:48.233878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 21:04:49.508637       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:04:49.753068       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 21:04:51.098987       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:04:51.493097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 21:04:52.085286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1212 21:04:54.810851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 21:04:56.542675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 21:04:59.479194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 21:05:01.280059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 21:05:03.231202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 21:05:05.294802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 21:05:06.646919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 21:05:07.513445       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 21:05:11.215768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:05:14.637981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 21:05:14.678743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 21:05:18.402248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 21:05:21.622956       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:05:22.967707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:05:27.893192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:05:30.607858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 21:05:30.694995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8443/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	
	
	==> kubelet <==
	Dec 12 21:05:34 ha-008703 kubelet[802]: E1212 21:05:34.038976     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:34 ha-008703 kubelet[802]: E1212 21:05:34.139649     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:34 ha-008703 kubelet[802]: E1212 21:05:34.240689     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:34 ha-008703 kubelet[802]: E1212 21:05:34.341674     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:34 ha-008703 kubelet[802]: E1212 21:05:34.442649     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:34 ha-008703 kubelet[802]: E1212 21:05:34.543421     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:34 ha-008703 kubelet[802]: E1212 21:05:34.644641     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:34 ha-008703 kubelet[802]: E1212 21:05:34.745340     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:34 ha-008703 kubelet[802]: E1212 21:05:34.845936     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:34 ha-008703 kubelet[802]: E1212 21:05:34.947161     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.048120     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.149462     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.249938     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.350992     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.451622     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.553108     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.566841     802 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ha-008703\" not found" node="ha-008703"
	Dec 12 21:05:35 ha-008703 kubelet[802]: I1212 21:05:35.566939     802 scope.go:117] "RemoveContainer" containerID="cf99f099390ca3b31b52598336e7181020c89586a8038d0c048d3d9fc813479f"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.567072     802 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-008703_kube-system(049929185cd47c814959942fd98ffb98)\"" pod="kube-system/kube-apiserver-ha-008703" podUID="049929185cd47c814959942fd98ffb98"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.653998     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.754702     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.855301     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:35 ha-008703 kubelet[802]: E1212 21:05:35.956671     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:36 ha-008703 kubelet[802]: E1212 21:05:36.057621     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	Dec 12 21:05:36 ha-008703 kubelet[802]: E1212 21:05:36.158786     802 reconstruct.go:189] "Failed to get Node status to reconstruct device paths" err="Get \"https://192.168.49.2:8443/api/v1/nodes/ha-008703\": dial tcp 192.168.49.2:8443: connect: connection refused"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703: exit status 2 (332.27185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "ha-008703" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.31s)
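Reading the minikube logs dump above, the failure looks less like a problem with the deleted node than like lost etcd quorum: the surviving member aec36adc501070cc keeps starting pre-vote elections at term 3 because both peers (192.168.49.3:2380 and 192.168.49.4:2380) are unreachable, linearizable reads time out after 7s, kube-apiserver exits with "Error creating leases: error creating storage factory: context deadline exceeded", and kube-controller-manager then cannot reach https://192.168.49.2:8443. A quick way to confirm that state by hand, assuming the ha-008703 container is still running (illustrative commands only, not part of the test):

	# list all containers on the control-plane node, including exited ones
	out/minikube-linux-arm64 -p ha-008703 ssh "sudo crictl ps -a"
	# follow the most recent kube-apiserver attempt; take the container ID from the listing above
	out/minikube-linux-arm64 -p ha-008703 ssh "sudo crictl logs <kube-apiserver-container-id>"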

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (2.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 stop --alsologtostderr -v 5: (2.685781298s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5: exit status 7 (135.423219ms)

                                                
                                                
-- stdout --
	ha-008703
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-008703-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-008703-m03
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-008703-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:05:39.379391  449129 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:05:39.379509  449129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:39.379519  449129 out.go:374] Setting ErrFile to fd 2...
	I1212 21:05:39.379524  449129 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:39.379780  449129 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:05:39.379970  449129 out.go:368] Setting JSON to false
	I1212 21:05:39.380011  449129 mustload.go:66] Loading cluster: ha-008703
	I1212 21:05:39.380085  449129 notify.go:221] Checking for updates...
	I1212 21:05:39.381376  449129 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:39.381463  449129 status.go:174] checking status of ha-008703 ...
	I1212 21:05:39.382121  449129 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:39.404084  449129 status.go:371] ha-008703 host status = "Stopped" (err=<nil>)
	I1212 21:05:39.404109  449129 status.go:384] host is not running, skipping remaining checks
	I1212 21:05:39.404117  449129 status.go:176] ha-008703 status: &{Name:ha-008703 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 21:05:39.404149  449129 status.go:174] checking status of ha-008703-m02 ...
	I1212 21:05:39.404473  449129 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:39.428493  449129 status.go:371] ha-008703-m02 host status = "Stopped" (err=<nil>)
	I1212 21:05:39.428583  449129 status.go:384] host is not running, skipping remaining checks
	I1212 21:05:39.428608  449129 status.go:176] ha-008703-m02 status: &{Name:ha-008703-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 21:05:39.428655  449129 status.go:174] checking status of ha-008703-m03 ...
	I1212 21:05:39.429091  449129 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:05:39.446465  449129 status.go:371] ha-008703-m03 host status = "Stopped" (err=<nil>)
	I1212 21:05:39.446489  449129 status.go:384] host is not running, skipping remaining checks
	I1212 21:05:39.446498  449129 status.go:176] ha-008703-m03 status: &{Name:ha-008703-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 21:05:39.446519  449129 status.go:174] checking status of ha-008703-m04 ...
	I1212 21:05:39.446838  449129 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 21:05:39.463806  449129 status.go:371] ha-008703-m04 host status = "Stopped" (err=<nil>)
	I1212 21:05:39.463831  449129 status.go:384] host is not running, skipping remaining checks
	I1212 21:05:39.463838  449129 status.go:176] ha-008703-m04 status: &{Name:ha-008703-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-008703-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-008703-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-008703-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-008703-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-008703-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-008703-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-008703-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-008703-m03
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-008703-m04
type: Worker
host: Stopped
kubelet: Stopped
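
All three assertion blocks above stem from the same mismatch: the status output still lists ha-008703-m02 as a control-plane node, so every count is one higher than what the StopCluster check expects (two control planes, three stopped kubelets, two stopped apiservers, per the ha_test.go:545/551/554 messages). The checks appear to reduce to counting those lines in the status output; the exact logic lives in ha_test.go. A hand-rolled equivalent, assuming the status text above has been saved to a hypothetical status.txt (illustrative only, not the test's code):

	# count the same lines the test asserts on
	grep -c "type: Control Plane" status.txt   # StopCluster expects 2; the output above has 3
	grep -c "kubelet: Stopped" status.txt      # expects 3; the output above has 4
	grep -c "apiserver: Stopped" status.txt    # expects 2; the output above has 3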

                                                
                                                
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-008703
helpers_test.go:244: (dbg) docker inspect ha-008703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	        "Created": "2025-12-12T20:51:45.347520369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2025-12-12T20:57:42.720187316Z",
	            "FinishedAt": "2025-12-12T21:05:38.645326548Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hosts",
	        "LogPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a-json.log",
	        "Name": "/ha-008703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-008703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-008703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	                "LowerDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-008703",
	                "Source": "/var/lib/docker/volumes/ha-008703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-008703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-008703",
	                "name.minikube.sigs.k8s.io": "ha-008703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-008703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff7ed303f4da65b7f5bbe1449be583e134fa05bb2920a77ae31b6f437cc1bd4b",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-008703",
	                        "2ec03df03a30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703: exit status 7 (75.252843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 7 (may be ok)
helpers_test.go:250: "ha-008703" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.92s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (95.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m30.550558957s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5: (1.15506067s)
ha_test.go:573: status says not two control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:576: status says not three hosts are running: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:579: status says not three kubelets are running: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:582: status says not two apiservers are running: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
ha_test.go:599: expected 3 nodes Ready status to be True, got 
-- stdout --
	' True
	 True
	 True
	 True
	'

                                                
                                                
-- /stdout --
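The check at ha_test.go:599 emits one Ready-condition line per node via the go-template shown above; with all four nodes (three control planes plus the m04 worker) Running after the restart, four "True" lines come back where the test expected three. Counting them by hand, assuming kubectl is pointed at this cluster, could be done with:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}' | grep -c True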
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-008703
helpers_test.go:244: (dbg) docker inspect ha-008703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	        "Created": "2025-12-12T20:51:45.347520369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 449316,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:05:39.880681825Z",
	            "FinishedAt": "2025-12-12T21:05:38.645326548Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hosts",
	        "LogPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a-json.log",
	        "Name": "/ha-008703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-008703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-008703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	                "LowerDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-008703",
	                "Source": "/var/lib/docker/volumes/ha-008703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-008703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-008703",
	                "name.minikube.sigs.k8s.io": "ha-008703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56820d5d7e78ec2f02da47e339541c9ef651db5d532d64770a21ce2bbb5446a4",
	            "SandboxKey": "/var/run/docker/netns/56820d5d7e78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-008703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:e7:89:49:21:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff7ed303f4da65b7f5bbe1449be583e134fa05bb2920a77ae31b6f437cc1bd4b",
	                    "EndpointID": "3c6a3818203b2804ed1a97d15e01e57b58ac1b4d017d616dc02dd9125b0a0f3c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-008703",
	                        "2ec03df03a30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
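The NetworkSettings block above shows the node's SSH endpoint published on 127.0.0.1:33202 (the "22/tcp" mapping), and the provisioning log further down connects as user "docker" with the profile's id_rsa key. A hand-rolled connection along the same lines, with the key path taken from that log, would be roughly:

    ssh -o StrictHostKeyChecking=no -p 33202 \
        -i /home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa \
        docker@127.0.0.1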
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703
helpers_test.go:253: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 logs -n 25: (2.242422755s)
helpers_test.go:261: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-008703 cp ha-008703-m03:/home/docker/cp-test.txt ha-008703-m04:/home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp testdata/cp-test.txt ha-008703-m04:/home/docker/cp-test.txt                                                            │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703-m04.txt │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703:/home/docker/cp-test_ha-008703-m04_ha-008703.txt                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703.txt                                                │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m02 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m03:/home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node start m02 --alsologtostderr -v 5                                                                                     │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:57 UTC │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │ 12 Dec 25 20:57 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5                                                                                  │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	│ node    │ ha-008703 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │ 12 Dec 25 21:05 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │ 12 Dec 25 21:07 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:05:39
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:05:39.605178  449185 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:05:39.605402  449185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:39.605430  449185 out.go:374] Setting ErrFile to fd 2...
	I1212 21:05:39.605450  449185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:39.605864  449185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:05:39.606369  449185 out.go:368] Setting JSON to false
	I1212 21:05:39.607946  449185 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":13692,"bootTime":1765559848,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 21:05:39.608060  449185 start.go:143] virtualization:  
	I1212 21:05:39.611335  449185 out.go:179] * [ha-008703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 21:05:39.615242  449185 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:05:39.615314  449185 notify.go:221] Checking for updates...
	I1212 21:05:39.621077  449185 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:05:39.623949  449185 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:39.626804  449185 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 21:05:39.629715  449185 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 21:05:39.632603  449185 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:05:39.635954  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:39.636566  449185 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:05:39.669276  449185 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 21:05:39.669398  449185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:05:39.732289  449185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 21:05:39.722148611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:05:39.732454  449185 docker.go:319] overlay module found
	I1212 21:05:39.735677  449185 out.go:179] * Using the docker driver based on existing profile
	I1212 21:05:39.738449  449185 start.go:309] selected driver: docker
	I1212 21:05:39.738468  449185 start.go:927] validating driver "docker" against &{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:39.738617  449185 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:05:39.738715  449185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:05:39.793928  449185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 21:05:39.784653162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:05:39.794497  449185 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:05:39.794535  449185 cni.go:84] Creating CNI manager for ""
	I1212 21:05:39.794590  449185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 21:05:39.794655  449185 start.go:353] cluster config:
	{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:39.797771  449185 out.go:179] * Starting "ha-008703" primary control-plane node in "ha-008703" cluster
	I1212 21:05:39.800532  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:05:39.803460  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:05:39.806386  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:39.806435  449185 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 21:05:39.806449  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:05:39.806468  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:05:39.806557  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:05:39.806568  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:05:39.806736  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:39.826241  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:05:39.826266  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:05:39.826283  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:05:39.826317  449185 start.go:360] acquireMachinesLock for ha-008703: {Name:mk6e7d74f274e3ed345384f8b747c056bd141bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:05:39.826376  449185 start.go:364] duration metric: took 38.285µs to acquireMachinesLock for "ha-008703"
	I1212 21:05:39.826401  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:05:39.826407  449185 fix.go:54] fixHost starting: 
	I1212 21:05:39.826688  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:39.844490  449185 fix.go:112] recreateIfNeeded on ha-008703: state=Stopped err=<nil>
	W1212 21:05:39.844521  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:05:39.847711  449185 out.go:252] * Restarting existing docker container for "ha-008703" ...
	I1212 21:05:39.847788  449185 cli_runner.go:164] Run: docker start ha-008703
	I1212 21:05:40.139310  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:40.163240  449185 kic.go:430] container "ha-008703" state is running.
	I1212 21:05:40.163662  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:40.191201  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:40.191459  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:05:40.191534  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:40.219354  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:40.219684  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:40.219693  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:05:40.220585  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:05:43.371942  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 21:05:43.371968  449185 ubuntu.go:182] provisioning hostname "ha-008703"
	I1212 21:05:43.372054  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.389586  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:43.389913  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:43.389930  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703 && echo "ha-008703" | sudo tee /etc/hostname
	I1212 21:05:43.553625  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 21:05:43.553711  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.571751  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:43.572079  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:43.572102  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:05:43.724831  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:05:43.724856  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:05:43.724884  449185 ubuntu.go:190] setting up certificates
	I1212 21:05:43.724903  449185 provision.go:84] configureAuth start
	I1212 21:05:43.724977  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:43.743377  449185 provision.go:143] copyHostCerts
	I1212 21:05:43.743421  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:43.743463  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:05:43.743471  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:43.743550  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:05:43.743646  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:43.743662  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:05:43.743667  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:43.743692  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:05:43.743751  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:43.743767  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:05:43.743771  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:43.743797  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:05:43.743859  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703 san=[127.0.0.1 192.168.49.2 ha-008703 localhost minikube]
	I1212 21:05:43.832472  449185 provision.go:177] copyRemoteCerts
	I1212 21:05:43.832541  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:05:43.832590  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.850299  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:43.956285  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:05:43.956420  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:05:43.974303  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:05:43.974381  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1212 21:05:43.992649  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:05:43.992714  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:05:44.013810  449185 provision.go:87] duration metric: took 288.892734ms to configureAuth
	I1212 21:05:44.013838  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:05:44.014088  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:44.014212  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.036649  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:44.037017  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:44.037041  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:05:44.386038  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:05:44.386060  449185 machine.go:97] duration metric: took 4.194590859s to provisionDockerMachine
	I1212 21:05:44.386072  449185 start.go:293] postStartSetup for "ha-008703" (driver="docker")
	I1212 21:05:44.386084  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:05:44.386193  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:05:44.386264  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.403386  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.508670  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:05:44.512195  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:05:44.512221  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:05:44.512236  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:05:44.512291  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:05:44.512398  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:05:44.512408  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:05:44.512511  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:05:44.520678  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:44.539590  449185 start.go:296] duration metric: took 153.501859ms for postStartSetup
	I1212 21:05:44.539670  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:05:44.539734  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.557736  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.661664  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:05:44.666383  449185 fix.go:56] duration metric: took 4.839968923s for fixHost
	I1212 21:05:44.666409  449185 start.go:83] releasing machines lock for "ha-008703", held for 4.840020362s
	I1212 21:05:44.666477  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:44.684762  449185 ssh_runner.go:195] Run: cat /version.json
	I1212 21:05:44.684817  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.685079  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:05:44.685134  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.708523  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.712753  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.904198  449185 ssh_runner.go:195] Run: systemctl --version
	I1212 21:05:44.910603  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:05:44.946561  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:05:44.951022  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:05:44.951140  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:05:44.959060  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:05:44.959085  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:05:44.959118  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:05:44.959164  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:05:44.974739  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:05:44.987642  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:05:44.987758  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:05:45.005197  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:05:45.023356  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:05:45.187771  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:05:45.360312  449185 docker.go:234] disabling docker service ...
	I1212 21:05:45.360416  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:05:45.382556  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:05:45.397072  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:05:45.515232  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:05:45.630674  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:05:45.644319  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:05:45.659761  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:05:45.659839  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.669217  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:05:45.669329  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.678932  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.691100  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.701211  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:05:45.710201  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.720671  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.729634  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.739187  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:05:45.747460  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:05:45.755441  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:45.880049  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:05:46.064833  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:05:46.064907  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:05:46.068969  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:05:46.069037  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:05:46.072837  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:05:46.098607  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:05:46.098708  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:46.128236  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:46.158573  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:05:46.161391  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:05:46.178132  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:05:46.181932  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:46.192021  449185 kubeadm.go:884] updating cluster {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:05:46.192177  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:46.192251  449185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:05:46.227916  449185 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:05:46.227942  449185 crio.go:433] Images already preloaded, skipping extraction
	I1212 21:05:46.227998  449185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:05:46.253605  449185 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:05:46.253629  449185 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:05:46.253638  449185 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 21:05:46.253742  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:05:46.253823  449185 ssh_runner.go:195] Run: crio config
	I1212 21:05:46.327816  449185 cni.go:84] Creating CNI manager for ""
	I1212 21:05:46.327839  449185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 21:05:46.327863  449185 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:05:46.327893  449185 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-008703 NodeName:ha-008703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:05:46.328051  449185 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-008703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:05:46.328077  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:05:46.328142  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:05:46.341034  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:46.341215  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1212 21:05:46.341284  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:05:46.349457  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:05:46.349531  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1212 21:05:46.357340  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1212 21:05:46.371153  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:05:46.384332  449185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1212 21:05:46.397565  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:05:46.411895  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:05:46.415692  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:46.426113  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:46.540637  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:46.557178  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.2
	I1212 21:05:46.557202  449185 certs.go:195] generating shared ca certs ...
	I1212 21:05:46.557219  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:46.557365  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:05:46.557420  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:05:46.557434  449185 certs.go:257] generating profile certs ...
	I1212 21:05:46.557525  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:05:46.557600  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904
	I1212 21:05:46.557649  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:05:46.557662  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:05:46.557674  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:05:46.557688  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:05:46.557703  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:05:46.557714  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:05:46.557731  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:05:46.557752  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:05:46.557770  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:05:46.557824  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:05:46.557861  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:05:46.557873  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:05:46.557901  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:05:46.557930  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:05:46.557955  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:05:46.558003  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:46.558037  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.558052  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.558066  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.558628  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:05:46.581904  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:05:46.602655  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:05:46.623772  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:05:46.644667  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:05:46.670849  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:05:46.690125  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:05:46.719167  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:05:46.743203  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:05:46.764296  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:05:46.788880  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:05:46.807678  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:05:46.822196  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:05:46.829401  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.838655  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:05:46.847305  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.851571  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.851686  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.894892  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:05:46.903217  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.911071  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:05:46.919222  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.923110  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.923186  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.964916  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:05:46.972957  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.980730  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:05:46.989130  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.993540  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.993610  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:05:47.036478  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:05:47.044309  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:05:47.048593  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:05:47.091048  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:05:47.132635  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:05:47.184472  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:05:47.233316  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:05:47.289483  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:05:47.363953  449185 kubeadm.go:401] StartCluster: {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:47.364111  449185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:05:47.364177  449185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:05:47.424432  449185 cri.go:89] found id: "05ba874359221bdf846b1fb8dbe911f962d4cf06c723c81f7a60410d0ca7fa2b"
	I1212 21:05:47.424457  449185 cri.go:89] found id: "6e71e63256727335b637c10c11453815d5622c8d5eb3fb9654535f5b4b692c2f"
	I1212 21:05:47.424463  449185 cri.go:89] found id: "62a05b797d32258dc4368afc3978a5b3f463b4eafed6049189130af79138e299"
	I1212 21:05:47.424466  449185 cri.go:89] found id: "03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d"
	I1212 21:05:47.424469  449185 cri.go:89] found id: "e2542b7b3b0add4c1c8e1167b6f86cc40b8c70e55d0db7ae97014db17bfee8b2"
	I1212 21:05:47.424473  449185 cri.go:89] found id: ""
	I1212 21:05:47.424525  449185 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 21:05:47.441549  449185 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T21:05:47Z" level=error msg="open /run/runc: no such file or directory"
	I1212 21:05:47.441640  449185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:05:47.453706  449185 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:05:47.453729  449185 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:05:47.453787  449185 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:05:47.466638  449185 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:47.467064  449185 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-008703" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:47.467171  449185 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "ha-008703" cluster setting kubeconfig missing "ha-008703" context setting]
	I1212 21:05:47.467570  449185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.468100  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:05:47.468627  449185 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 21:05:47.468649  449185 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 21:05:47.468655  449185 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 21:05:47.468661  449185 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 21:05:47.468665  449185 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 21:05:47.468983  449185 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:05:47.469097  449185 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 21:05:47.477581  449185 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 21:05:47.477605  449185 kubeadm.go:602] duration metric: took 23.869575ms to restartPrimaryControlPlane
	I1212 21:05:47.477614  449185 kubeadm.go:403] duration metric: took 113.6735ms to StartCluster
	I1212 21:05:47.477631  449185 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.477689  449185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:47.478278  449185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.478485  449185 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:05:47.478512  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:05:47.478526  449185 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:05:47.479081  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:47.484597  449185 out.go:179] * Enabled addons: 
	I1212 21:05:47.487542  449185 addons.go:530] duration metric: took 9.010305ms for enable addons: enabled=[]
	I1212 21:05:47.487605  449185 start.go:247] waiting for cluster config update ...
	I1212 21:05:47.487614  449185 start.go:256] writing updated cluster config ...
	I1212 21:05:47.491098  449185 out.go:203] 
	I1212 21:05:47.494772  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:47.494914  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:47.498660  449185 out.go:179] * Starting "ha-008703-m02" control-plane node in "ha-008703" cluster
	I1212 21:05:47.501545  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:05:47.504535  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:05:47.507691  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:47.507726  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:05:47.507835  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:05:47.507851  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:05:47.507972  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:47.508202  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:05:47.538497  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:05:47.538521  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:05:47.538535  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:05:47.538559  449185 start.go:360] acquireMachinesLock for ha-008703-m02: {Name:mk9bbd559a38ee71084b431688c18ccf671707a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:05:47.538627  449185 start.go:364] duration metric: took 48.131µs to acquireMachinesLock for "ha-008703-m02"
	I1212 21:05:47.538652  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:05:47.538660  449185 fix.go:54] fixHost starting: m02
	I1212 21:05:47.538948  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:47.574023  449185 fix.go:112] recreateIfNeeded on ha-008703-m02: state=Stopped err=<nil>
	W1212 21:05:47.574053  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:05:47.577557  449185 out.go:252] * Restarting existing docker container for "ha-008703-m02" ...
	I1212 21:05:47.577655  449185 cli_runner.go:164] Run: docker start ha-008703-m02
	I1212 21:05:47.980330  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:48.008294  449185 kic.go:430] container "ha-008703-m02" state is running.
	I1212 21:05:48.008939  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:48.047188  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:48.047422  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:05:48.047478  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:48.078749  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:48.079063  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:48.079074  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:05:48.079845  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44600->127.0.0.1:33207: read: connection reset by peer
	I1212 21:05:51.328699  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 21:05:51.328723  449185 ubuntu.go:182] provisioning hostname "ha-008703-m02"
	I1212 21:05:51.328784  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:51.373011  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:51.373328  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:51.373339  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m02 && echo "ha-008703-m02" | sudo tee /etc/hostname
	I1212 21:05:51.672250  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 21:05:51.672411  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:51.697392  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:51.697707  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:51.697724  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:05:51.885149  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:05:51.885219  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:05:51.885252  449185 ubuntu.go:190] setting up certificates
	I1212 21:05:51.885290  449185 provision.go:84] configureAuth start
	I1212 21:05:51.885368  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:51.907559  449185 provision.go:143] copyHostCerts
	I1212 21:05:51.907599  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:51.907631  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:05:51.907638  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:51.907718  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:05:51.907797  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:51.907814  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:05:51.907820  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:51.907846  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:05:51.907886  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:51.907901  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:05:51.907905  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:51.907929  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:05:51.907973  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m02 san=[127.0.0.1 192.168.49.3 ha-008703-m02 localhost minikube]
	I1212 21:05:52.137179  449185 provision.go:177] copyRemoteCerts
	I1212 21:05:52.137300  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:05:52.137386  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:52.156094  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:52.288849  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:05:52.288913  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:05:52.342195  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:05:52.342258  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:05:52.393562  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:05:52.393620  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:05:52.445696  449185 provision.go:87] duration metric: took 560.374153ms to configureAuth
	I1212 21:05:52.445764  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:05:52.446027  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:52.446170  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:52.478675  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:52.478980  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:52.478993  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:05:53.000008  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:05:53.000110  449185 machine.go:97] duration metric: took 4.952677944s to provisionDockerMachine
	I1212 21:05:53.000138  449185 start.go:293] postStartSetup for "ha-008703-m02" (driver="docker")
	I1212 21:05:53.000177  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:05:53.000293  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:05:53.000358  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.020786  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.128335  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:05:53.131751  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:05:53.131783  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:05:53.131795  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:05:53.131855  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:05:53.131934  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:05:53.131947  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:05:53.132049  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:05:53.139844  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:53.158393  449185 start.go:296] duration metric: took 158.21332ms for postStartSetup
	I1212 21:05:53.158474  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:05:53.158534  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.176037  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.281959  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:05:53.287302  449185 fix.go:56] duration metric: took 5.74863443s for fixHost
	I1212 21:05:53.287331  449185 start.go:83] releasing machines lock for "ha-008703-m02", held for 5.748691916s
	I1212 21:05:53.287402  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:53.307739  449185 out.go:179] * Found network options:
	I1212 21:05:53.310522  449185 out.go:179]   - NO_PROXY=192.168.49.2
	W1212 21:05:53.313363  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:05:53.313414  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:05:53.313489  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:05:53.313533  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.313574  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:05:53.313632  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.336547  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.336799  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.542870  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:05:53.567799  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:05:53.567925  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:05:53.589478  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:05:53.589553  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:05:53.589598  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:05:53.589671  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:05:53.609030  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:05:53.638599  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:05:53.638724  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:05:53.668742  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:05:53.694088  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:05:53.934693  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:05:54.164277  449185 docker.go:234] disabling docker service ...
	I1212 21:05:54.164417  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:05:54.185997  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:05:54.207462  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:05:54.437335  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:05:54.661473  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:05:54.679927  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:05:54.707742  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:05:54.707861  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.723319  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:05:54.723443  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.740396  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.751373  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.768858  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:05:54.780854  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.795944  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.808854  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.818935  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:05:54.833159  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:05:54.849406  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:55.082636  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:05:55.362814  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:05:55.362938  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:05:55.366812  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:05:55.366918  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:05:55.370570  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:05:55.399084  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:05:55.399168  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:55.428944  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:55.460814  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:05:55.463826  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:05:55.466808  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:05:55.495103  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:05:55.503442  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:55.518854  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:05:55.519096  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:55.519362  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:55.545294  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:05:55.545592  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.3
	I1212 21:05:55.545608  449185 certs.go:195] generating shared ca certs ...
	I1212 21:05:55.545622  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:55.545735  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:05:55.545785  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:05:55.545796  449185 certs.go:257] generating profile certs ...
	I1212 21:05:55.545885  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:05:55.545952  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.b6a91b51
	I1212 21:05:55.546008  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:05:55.546022  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:05:55.546043  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:05:55.546059  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:05:55.546082  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:05:55.546098  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:05:55.546112  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:05:55.546126  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:05:55.546142  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:05:55.546197  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:05:55.546246  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:05:55.546262  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:05:55.546293  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:05:55.546320  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:05:55.546354  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:05:55.546415  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:55.546463  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:55.546490  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:05:55.546515  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:05:55.546583  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:55.568767  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:55.668715  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 21:05:55.672576  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 21:05:55.680945  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 21:05:55.684500  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 21:05:55.693000  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 21:05:55.696718  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 21:05:55.704917  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 21:05:55.708459  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 21:05:55.717032  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 21:05:55.720547  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 21:05:55.728907  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 21:05:55.732537  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 21:05:55.740854  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:05:55.760026  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:05:55.778517  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:05:55.797624  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:05:55.817142  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:05:55.835385  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:05:55.853338  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:05:55.872093  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:05:55.890019  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:05:55.908331  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:05:55.926030  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:05:55.944002  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 21:05:55.956838  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 21:05:55.969593  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 21:05:55.982132  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 21:05:55.995578  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 21:05:56.013190  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 21:05:56.026969  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 21:05:56.040988  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:05:56.047942  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.056004  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:05:56.064163  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.068273  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.068362  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.109836  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:05:56.118260  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.126352  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:05:56.134010  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.137848  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.137914  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.179470  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:05:56.187587  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.195301  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:05:56.203258  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.207359  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.207467  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.248706  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:05:56.256310  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:05:56.260190  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:05:56.306385  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:05:56.347361  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:05:56.389865  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:05:56.430835  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:05:56.472973  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:05:56.521282  449185 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1212 21:05:56.521453  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:05:56.521498  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:05:56.521575  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:05:56.534831  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:56.534951  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1212 21:05:56.535047  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:05:56.543116  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:05:56.543223  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 21:05:56.551463  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:05:56.566227  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:05:56.579329  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:05:56.592969  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:05:56.596983  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:56.607297  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:56.744346  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:56.759793  449185 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:05:56.760120  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:56.766599  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:05:56.769234  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:56.908410  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:56.923082  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:05:56.923202  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:05:56.923464  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m02" to be "Ready" ...
	W1212 21:06:06.924664  449185 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:06:10.340284  449185 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:06:20.254665  449185 node_ready.go:49] node "ha-008703-m02" is "Ready"
	I1212 21:06:20.254694  449185 node_ready.go:38] duration metric: took 23.33118731s for node "ha-008703-m02" to be "Ready" ...
	I1212 21:06:20.254707  449185 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:06:20.254768  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:20.278828  449185 api_server.go:72] duration metric: took 23.518673135s to wait for apiserver process to appear ...
	I1212 21:06:20.278854  449185 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:06:20.278876  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:20.361760  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:06:20.361785  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:06:20.779312  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:20.809650  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:20.809728  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:21.279043  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:21.326274  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:21.326348  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:21.779606  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:21.811129  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:21.811210  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:22.279504  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:22.299466  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:22.299549  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:22.779116  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:22.797946  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:22.798028  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:23.279662  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:23.308514  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:23.308642  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:23.779220  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:23.800333  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:23.800429  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:24.278995  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:24.291485  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 21:06:24.307186  449185 api_server.go:141] control plane version: v1.34.2
	I1212 21:06:24.307278  449185 api_server.go:131] duration metric: took 4.028399738s to wait for apiserver health ...
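The wait above is a poll loop against the apiserver's /healthz endpoint, retrying while it returns 500 (the rbac/bootstrap-roles post-start hook still pending) until it returns 200. A minimal Go sketch of an equivalent poll, using the endpoint from the log; InsecureSkipVerify is only to keep the sketch short, the real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	// Poll https://192.168.49.2:8443/healthz (endpoint taken from the log above)
	// roughly every 500ms until it returns 200 or the deadline passes.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body))
				return
			}
			log.Printf("healthz returned %d, retrying", resp.StatusCode)
		} else {
			log.Printf("healthz request failed: %v, retrying", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver never became healthy")
}
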
	I1212 21:06:24.307306  449185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:06:24.326207  449185 system_pods.go:59] 26 kube-system pods found
	I1212 21:06:24.326317  449185 system_pods.go:61] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.326341  449185 system_pods.go:61] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.326383  449185 system_pods.go:61] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:24.326404  449185 system_pods.go:61] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:06:24.326425  449185 system_pods.go:61] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:24.326458  449185 system_pods.go:61] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:24.326482  449185 system_pods.go:61] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:24.326502  449185 system_pods.go:61] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:24.326524  449185 system_pods.go:61] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:24.326559  449185 system_pods.go:61] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.326604  449185 system_pods.go:61] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.326624  449185 system_pods.go:61] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:24.326647  449185 system_pods.go:61] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.326684  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.326711  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:24.326732  449185 system_pods.go:61] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:24.326752  449185 system_pods.go:61] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:24.326770  449185 system_pods.go:61] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:24.326797  449185 system_pods.go:61] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:24.326828  449185 system_pods.go:61] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:24.326851  449185 system_pods.go:61] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:06:24.326870  449185 system_pods.go:61] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:24.326900  449185 system_pods.go:61] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:24.326923  449185 system_pods.go:61] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:24.326944  449185 system_pods.go:61] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:24.326964  449185 system_pods.go:61] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running
	I1212 21:06:24.326987  449185 system_pods.go:74] duration metric: took 19.648646ms to wait for pod list to return data ...
	I1212 21:06:24.327025  449185 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:06:24.345476  449185 default_sa.go:45] found service account: "default"
	I1212 21:06:24.345542  449185 default_sa.go:55] duration metric: took 18.497613ms for default service account to be created ...
	I1212 21:06:24.345567  449185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:06:24.441449  449185 system_pods.go:86] 26 kube-system pods found
	I1212 21:06:24.441494  449185 system_pods.go:89] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.441509  449185 system_pods.go:89] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.441517  449185 system_pods.go:89] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:24.441529  449185 system_pods.go:89] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:06:24.441537  449185 system_pods.go:89] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:24.441542  449185 system_pods.go:89] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:24.441549  449185 system_pods.go:89] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:24.441553  449185 system_pods.go:89] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:24.441557  449185 system_pods.go:89] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:24.441564  449185 system_pods.go:89] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.441576  449185 system_pods.go:89] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.441580  449185 system_pods.go:89] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:24.441592  449185 system_pods.go:89] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.441601  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.441606  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:24.441612  449185 system_pods.go:89] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:24.441616  449185 system_pods.go:89] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:24.441620  449185 system_pods.go:89] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:24.441627  449185 system_pods.go:89] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:24.441631  449185 system_pods.go:89] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:24.441646  449185 system_pods.go:89] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:06:24.441650  449185 system_pods.go:89] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:24.441654  449185 system_pods.go:89] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:24.441665  449185 system_pods.go:89] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:24.441671  449185 system_pods.go:89] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:24.441675  449185 system_pods.go:89] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running
	I1212 21:06:24.441684  449185 system_pods.go:126] duration metric: took 96.098139ms to wait for k8s-apps to be running ...
	I1212 21:06:24.441697  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:06:24.441755  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:06:24.458749  449185 system_svc.go:56] duration metric: took 17.042535ms WaitForService to wait for kubelet
	I1212 21:06:24.458826  449185 kubeadm.go:587] duration metric: took 27.69867432s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:24.458863  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:06:24.463250  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463295  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463308  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463313  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463317  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463322  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463325  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463330  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463334  449185 node_conditions.go:105] duration metric: took 4.443929ms to run NodePressure ...
	I1212 21:06:24.463360  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:06:24.463389  449185 start.go:256] writing updated cluster config ...
	I1212 21:06:24.467450  449185 out.go:203] 
	I1212 21:06:24.471714  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:24.471840  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.475478  449185 out.go:179] * Starting "ha-008703-m03" control-plane node in "ha-008703" cluster
	I1212 21:06:24.479357  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:06:24.482576  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:06:24.485573  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:06:24.485605  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:06:24.485687  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:06:24.485718  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:06:24.485736  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:06:24.485861  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.512091  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:06:24.512112  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:06:24.512126  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:06:24.512153  449185 start.go:360] acquireMachinesLock for ha-008703-m03: {Name:mkc4792dc097e09b497b46fff7452c5b0b6f70aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:24.512210  449185 start.go:364] duration metric: took 41.255µs to acquireMachinesLock for "ha-008703-m03"
	I1212 21:06:24.512230  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:06:24.512237  449185 fix.go:54] fixHost starting: m03
	I1212 21:06:24.512562  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:06:24.547705  449185 fix.go:112] recreateIfNeeded on ha-008703-m03: state=Stopped err=<nil>
	W1212 21:06:24.547736  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:06:24.551016  449185 out.go:252] * Restarting existing docker container for "ha-008703-m03" ...
	I1212 21:06:24.551124  449185 cli_runner.go:164] Run: docker start ha-008703-m03
	I1212 21:06:24.918317  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:06:24.943282  449185 kic.go:430] container "ha-008703-m03" state is running.
	I1212 21:06:24.944655  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:24.976163  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.976462  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:06:24.976536  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:25.007740  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:25.008073  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:25.008082  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:06:25.008934  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45896->127.0.0.1:33212: read: connection reset by peer
	I1212 21:06:28.195900  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m03
	
	I1212 21:06:28.195925  449185 ubuntu.go:182] provisioning hostname "ha-008703-m03"
	I1212 21:06:28.195992  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:28.238514  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:28.238834  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:28.238851  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m03 && echo "ha-008703-m03" | sudo tee /etc/hostname
	I1212 21:06:28.479384  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m03
	
	I1212 21:06:28.479480  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:28.507106  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:28.507416  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:28.507437  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:06:28.751314  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:06:28.751390  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:06:28.751429  449185 ubuntu.go:190] setting up certificates
	I1212 21:06:28.751469  449185 provision.go:84] configureAuth start
	I1212 21:06:28.751595  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:28.780423  449185 provision.go:143] copyHostCerts
	I1212 21:06:28.780473  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:28.780506  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:06:28.780519  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:28.780599  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:06:28.780687  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:28.780712  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:06:28.780720  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:28.780749  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:06:28.780795  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:28.780816  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:06:28.780823  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:28.780848  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:06:28.780902  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m03 san=[127.0.0.1 192.168.49.4 ha-008703-m03 localhost minikube]
	I1212 21:06:29.132570  449185 provision.go:177] copyRemoteCerts
	I1212 21:06:29.132679  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:06:29.132752  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:29.161077  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:29.290001  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:06:29.290063  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:06:29.326015  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:06:29.326077  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:06:29.373017  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:06:29.373102  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:06:29.430671  449185 provision.go:87] duration metric: took 679.168963ms to configureAuth
	I1212 21:06:29.430700  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:06:29.430943  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:29.431050  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:29.464440  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:29.464756  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:29.464775  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:06:30.522791  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:06:30.522817  449185 machine.go:97] duration metric: took 5.546337341s to provisionDockerMachine
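The provisioning steps above are shell commands run over SSH into the restarted container (127.0.0.1:33212, user docker, key path as shown in the sshutil lines). A minimal Go sketch of running one such command with golang.org/x/crypto/ssh; host key checking is skipped only because the target is a throwaway local test container:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and port are the ones that appear in the log above.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33212", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}
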
	I1212 21:06:30.522830  449185 start.go:293] postStartSetup for "ha-008703-m03" (driver="docker")
	I1212 21:06:30.522841  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:06:30.522923  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:06:30.522969  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.541196  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.648836  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:06:30.652559  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:06:30.652598  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:06:30.652624  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:06:30.652708  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:06:30.652823  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:06:30.652833  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:06:30.652939  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:06:30.661331  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:30.687281  449185 start.go:296] duration metric: took 164.433925ms for postStartSetup
	I1212 21:06:30.687373  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:06:30.687421  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.713364  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.821971  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:06:30.827033  449185 fix.go:56] duration metric: took 6.314788872s for fixHost
	I1212 21:06:30.827061  449185 start.go:83] releasing machines lock for "ha-008703-m03", held for 6.314842198s
	I1212 21:06:30.827140  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:30.847749  449185 out.go:179] * Found network options:
	I1212 21:06:30.850465  449185 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1212 21:06:30.853486  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853520  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853545  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853558  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:06:30.853630  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:06:30.853672  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.853950  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:06:30.854006  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.875211  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.901708  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:31.084053  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:06:31.089338  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:06:31.089442  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:06:31.098288  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:06:31.098362  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:06:31.098418  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:06:31.098504  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:06:31.115825  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:06:31.132457  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:06:31.132578  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:06:31.150352  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:06:31.166465  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:06:31.301826  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:06:31.519838  449185 docker.go:234] disabling docker service ...
	I1212 21:06:31.519963  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:06:31.552895  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:06:31.586883  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:06:31.921487  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:06:32.171189  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:06:32.196225  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:06:32.218996  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:06:32.219066  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.231170  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:06:32.231254  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.264701  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.278943  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.293177  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:06:32.313973  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.323884  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.333399  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.345640  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:06:32.354606  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:06:32.378038  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:32.601691  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
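The sed commands above rewrite the CRI-O drop-in config to pin the pause image and the cgroupfs cgroup manager before restarting crio. A small Go sketch of the same kind of in-place rewrite on a sample config string (the sample content is illustrative, not the real 02-crio.conf):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	// Mirror of the sed edits in the log: force the pause image and the cgroupfs driver.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	fmt.Print(conf)
}
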
	I1212 21:06:32.867254  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:06:32.867377  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:06:32.871734  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:06:32.871807  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:06:32.875400  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:06:32.900774  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:06:32.900910  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:06:32.930896  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:06:32.972077  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:06:32.974985  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:06:32.977916  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1212 21:06:32.980878  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:06:32.998829  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:06:33.008314  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:06:33.019604  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:06:33.019853  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:33.020130  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:06:33.050582  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:06:33.050909  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.4
	I1212 21:06:33.050924  449185 certs.go:195] generating shared ca certs ...
	I1212 21:06:33.050954  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:06:33.051090  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:06:33.051141  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:06:33.051152  449185 certs.go:257] generating profile certs ...
	I1212 21:06:33.051239  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:06:33.051314  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.77152b1c
	I1212 21:06:33.051365  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:06:33.051374  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:06:33.051387  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:06:33.051401  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:06:33.051418  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:06:33.051430  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:06:33.051446  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:06:33.051463  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:06:33.051479  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:06:33.051535  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:06:33.051571  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:06:33.051584  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:06:33.051615  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:06:33.051643  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:06:33.051671  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:06:33.051721  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:33.051757  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.051774  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.051785  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.051851  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:06:33.071355  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:06:33.180711  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 21:06:33.184847  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 21:06:33.194292  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 21:06:33.198466  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 21:06:33.207132  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 21:06:33.210762  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 21:06:33.219366  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 21:06:33.222902  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 21:06:33.231254  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 21:06:33.235252  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 21:06:33.245320  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 21:06:33.249647  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 21:06:33.259234  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:06:33.282501  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:06:33.308249  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:06:33.330512  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:06:33.350745  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:06:33.371841  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:06:33.392489  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:06:33.415260  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:06:33.435093  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:06:33.455125  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:06:33.475775  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:06:33.503119  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 21:06:33.519902  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 21:06:33.541097  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 21:06:33.558546  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 21:06:33.580936  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 21:06:33.604112  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 21:06:33.628438  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 21:06:33.645138  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:06:33.653214  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.661760  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:06:33.672498  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.677561  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.677637  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.725658  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:06:33.734300  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.742147  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:06:33.750364  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.754312  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.754435  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.795883  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:06:33.803561  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.811944  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:06:33.819768  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.823821  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.823917  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.869341  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:06:33.877525  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:06:33.881524  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:06:33.923421  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:06:33.965151  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:06:34.007958  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:06:34.056315  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:06:34.099324  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
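Each "openssl x509 -noout -in <cert> -checkend 86400" call above asks whether a certificate is still valid for at least another 24 hours. A minimal Go equivalent using crypto/x509, with one of the cert paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// Same idea as `openssl x509 -noout -checkend 86400`: fail if the
	// certificate expires within the next 24 hours.
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h, expires", cert.NotAfter)
}
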
	I1212 21:06:34.142509  449185 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.2 crio true true} ...
	I1212 21:06:34.142710  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:06:34.142750  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:06:34.142821  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:06:34.155586  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:06:34.155655  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
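The generated kube-vip static-pod manifest above is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines later. A small Go sketch that reads such a manifest back and prints the fields of interest, assuming gopkg.in/yaml.v3 and only the fields shown (not a full Kubernetes Pod type):

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

// Minimal view of the static-pod manifest; only the fields inspected below.
type pod struct {
	Kind     string `yaml:"kind"`
	Metadata struct {
		Name      string `yaml:"name"`
		Namespace string `yaml:"namespace"`
	} `yaml:"metadata"`
	Spec struct {
		Containers []struct {
			Name  string `yaml:"name"`
			Image string `yaml:"image"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	raw, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml") // path used in the log
	if err != nil {
		log.Fatal(err)
	}
	var p pod
	if err := yaml.Unmarshal(raw, &p); err != nil {
		log.Fatal(err)
	}
	if len(p.Spec.Containers) == 0 {
		log.Fatal("no containers in manifest")
	}
	fmt.Printf("%s %s/%s image=%s\n", p.Kind, p.Metadata.Namespace, p.Metadata.Name, p.Spec.Containers[0].Image)
}
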
	I1212 21:06:34.155735  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:06:34.164504  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:06:34.164593  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 21:06:34.172960  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:06:34.187238  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:06:34.202155  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:06:34.217531  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:06:34.221916  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:06:34.232222  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:34.409764  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:06:34.425465  449185 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:06:34.426019  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:34.429018  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:06:34.431984  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:34.608481  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:06:34.623603  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:06:34.623719  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:06:34.623971  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m03" to be "Ready" ...
	I1212 21:06:34.627483  449185 node_ready.go:49] node "ha-008703-m03" is "Ready"
	I1212 21:06:34.627510  449185 node_ready.go:38] duration metric: took 3.502711ms for node "ha-008703-m03" to be "Ready" ...
	I1212 21:06:34.627524  449185 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:06:34.627583  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:35.127774  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:35.627665  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:36.128468  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:36.628211  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:37.128314  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:37.627991  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:38.127766  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:38.627868  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:39.128698  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:39.628035  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:40.128648  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:40.627740  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:41.128354  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:41.628245  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:42.130632  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:42.627827  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:43.128583  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:43.627968  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:44.128136  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:44.628605  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:45.128568  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:45.627727  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:46.128033  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:46.627763  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:47.128250  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:47.628035  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:48.127920  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:48.628389  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:49.127872  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:49.628485  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:50.127813  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:50.627737  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:51.128714  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:51.628186  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:52.128495  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:52.627734  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.128077  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.628172  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.643287  449185 api_server.go:72] duration metric: took 19.217761741s to wait for apiserver process to appear ...
	I1212 21:06:53.643310  449185 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:06:53.643330  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:53.653231  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 21:06:53.654408  449185 api_server.go:141] control plane version: v1.34.2
	I1212 21:06:53.654429  449185 api_server.go:131] duration metric: took 11.111371ms to wait for apiserver health ...
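The block above polls `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until the process exists, then probes /healthz on the API server until it returns 200. A minimal Go sketch of that poll-then-probe pattern, assuming a hypothetical runSSH helper for the remote command:

    // pollApiserver waits for the kube-apiserver process to appear, then for
    // /healthz to return 200. runSSH is a hypothetical helper that runs a
    // command on the node and returns an error on non-zero exit.
    package waiters

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func pollApiserver(runSSH func(cmd string) error, healthzURL string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        // Phase 1: wait for the process (mirrors the repeated pgrep calls above).
        for time.Now().Before(deadline) {
            if err := runSSH(`sudo pgrep -xnf kube-apiserver.*minikube.*`); err == nil {
                break
            }
            time.Sleep(500 * time.Millisecond)
        }
        // Phase 2: wait for a 200 from /healthz (self-signed apiserver cert, so skip verify here).
        client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}}
        for time.Now().Before(deadline) {
            resp, err := client.Get(healthzURL)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %s", timeout)
    }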
	I1212 21:06:53.654438  449185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:06:53.664181  449185 system_pods.go:59] 26 kube-system pods found
	I1212 21:06:53.664268  449185 system_pods.go:61] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.664292  449185 system_pods.go:61] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.664326  449185 system_pods.go:61] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:53.664350  449185 system_pods.go:61] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running
	I1212 21:06:53.664399  449185 system_pods.go:61] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:53.664423  449185 system_pods.go:61] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:53.664447  449185 system_pods.go:61] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:53.664476  449185 system_pods.go:61] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:53.664511  449185 system_pods.go:61] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:53.664543  449185 system_pods.go:61] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running
	I1212 21:06:53.664562  449185 system_pods.go:61] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running
	I1212 21:06:53.664586  449185 system_pods.go:61] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:53.664617  449185 system_pods.go:61] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running
	I1212 21:06:53.664639  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running
	I1212 21:06:53.664655  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:53.664672  449185 system_pods.go:61] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:53.664692  449185 system_pods.go:61] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:53.664722  449185 system_pods.go:61] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:53.664747  449185 system_pods.go:61] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:53.664767  449185 system_pods.go:61] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:53.664786  449185 system_pods.go:61] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running
	I1212 21:06:53.664806  449185 system_pods.go:61] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:53.664833  449185 system_pods.go:61] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:53.664856  449185 system_pods.go:61] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:53.664876  449185 system_pods.go:61] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:53.664898  449185 system_pods.go:61] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:06:53.664934  449185 system_pods.go:74] duration metric: took 10.478512ms to wait for pod list to return data ...
	I1212 21:06:53.664963  449185 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:06:53.672021  449185 default_sa.go:45] found service account: "default"
	I1212 21:06:53.672087  449185 default_sa.go:55] duration metric: took 7.103458ms for default service account to be created ...
	I1212 21:06:53.672114  449185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:06:53.683734  449185 system_pods.go:86] 26 kube-system pods found
	I1212 21:06:53.683818  449185 system_pods.go:89] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.683843  449185 system_pods.go:89] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.683876  449185 system_pods.go:89] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:53.683898  449185 system_pods.go:89] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running
	I1212 21:06:53.683916  449185 system_pods.go:89] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:53.683935  449185 system_pods.go:89] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:53.683958  449185 system_pods.go:89] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:53.683985  449185 system_pods.go:89] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:53.684009  449185 system_pods.go:89] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:53.684028  449185 system_pods.go:89] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running
	I1212 21:06:53.684048  449185 system_pods.go:89] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running
	I1212 21:06:53.684069  449185 system_pods.go:89] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:53.684096  449185 system_pods.go:89] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running
	I1212 21:06:53.684121  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running
	I1212 21:06:53.684144  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:53.684165  449185 system_pods.go:89] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:53.684195  449185 system_pods.go:89] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:53.684216  449185 system_pods.go:89] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:53.684234  449185 system_pods.go:89] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:53.684254  449185 system_pods.go:89] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:53.684274  449185 system_pods.go:89] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running
	I1212 21:06:53.684305  449185 system_pods.go:89] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:53.684334  449185 system_pods.go:89] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:53.684356  449185 system_pods.go:89] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:53.684505  449185 system_pods.go:89] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:53.684532  449185 system_pods.go:89] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:06:53.684555  449185 system_pods.go:126] duration metric: took 12.421784ms to wait for k8s-apps to be running ...
	I1212 21:06:53.684581  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:06:53.684664  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:06:53.707726  449185 system_svc.go:56] duration metric: took 23.13631ms WaitForService to wait for kubelet
	I1212 21:06:53.707794  449185 kubeadm.go:587] duration metric: took 19.282272877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:53.707828  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:06:53.713066  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713138  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713167  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713189  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713224  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713251  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713272  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713294  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713315  449185 node_conditions.go:105] duration metric: took 5.4683ms to run NodePressure ...
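The NodePressure step simply reads each node's ephemeral-storage and cpu capacity, as the four pairs of node_conditions lines above show. A client-go sketch of the same read, assuming a clientset built from the config above:

    // listNodeCapacity prints ephemeral-storage and cpu capacity for every node,
    // mirroring the node_conditions lines in the log. Clientset construction is assumed.
    package nodes

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func listNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
        nodeList, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodeList.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
        }
        return nil
    }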
	I1212 21:06:53.713355  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:06:53.713389  449185 start.go:256] writing updated cluster config ...
	I1212 21:06:53.716967  449185 out.go:203] 
	I1212 21:06:53.720156  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:53.720328  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:53.723670  449185 out.go:179] * Starting "ha-008703-m04" worker node in "ha-008703" cluster
	I1212 21:06:53.726637  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:06:53.729576  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:06:53.732517  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:06:53.732614  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:06:53.732589  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:06:53.732947  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:06:53.732979  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:06:53.733130  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:53.769116  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:06:53.769147  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:06:53.769168  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:06:53.769196  449185 start.go:360] acquireMachinesLock for ha-008703-m04: {Name:mk62cc2a2cc2e6d3b3f47556aaddea9ef719055b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:53.769254  449185 start.go:364] duration metric: took 38.549µs to acquireMachinesLock for "ha-008703-m04"
	I1212 21:06:53.769277  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:06:53.769289  449185 fix.go:54] fixHost starting: m04
	I1212 21:06:53.769545  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 21:06:53.786769  449185 fix.go:112] recreateIfNeeded on ha-008703-m04: state=Stopped err=<nil>
	W1212 21:06:53.786801  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:06:53.789926  449185 out.go:252] * Restarting existing docker container for "ha-008703-m04" ...
	I1212 21:06:53.790089  449185 cli_runner.go:164] Run: docker start ha-008703-m04
	I1212 21:06:54.156965  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 21:06:54.178693  449185 kic.go:430] container "ha-008703-m04" state is running.
	I1212 21:06:54.179092  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:54.203905  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:54.204146  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:06:54.204209  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:54.236695  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:54.237065  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:54.237081  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:06:54.237686  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:06:57.432360  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m04
	
	I1212 21:06:57.432405  449185 ubuntu.go:182] provisioning hostname "ha-008703-m04"
	I1212 21:06:57.432471  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:57.466545  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:57.466905  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:57.466917  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m04 && echo "ha-008703-m04" | sudo tee /etc/hostname
	I1212 21:06:57.695949  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m04
	
	I1212 21:06:57.696057  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:57.725675  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:57.725993  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:57.726015  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:06:57.922048  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:06:57.922076  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:06:57.922097  449185 ubuntu.go:190] setting up certificates
	I1212 21:06:57.922108  449185 provision.go:84] configureAuth start
	I1212 21:06:57.922191  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:57.949300  449185 provision.go:143] copyHostCerts
	I1212 21:06:57.949346  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:57.949379  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:06:57.949390  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:57.949467  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:06:57.949557  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:57.949579  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:06:57.949590  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:57.949619  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:06:57.949669  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:57.949692  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:06:57.949702  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:57.949735  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:06:57.949797  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m04 san=[127.0.0.1 192.168.49.5 ha-008703-m04 localhost minikube]
	I1212 21:06:58.253055  449185 provision.go:177] copyRemoteCerts
	I1212 21:06:58.253130  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:06:58.253185  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:58.272770  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:58.384265  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:06:58.384326  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:06:58.432775  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:06:58.432846  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:06:58.468705  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:06:58.468769  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:06:58.498893  449185 provision.go:87] duration metric: took 576.767506ms to configureAuth
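configureAuth generates a server certificate for the node with SANs [127.0.0.1 192.168.49.5 ha-008703-m04 localhost minikube], signed by the CA under .minikube/certs, and copies it to /etc/docker on the machine. A crypto/x509 sketch of issuing such a certificate from an existing CA (key size, serial number, validity and output path are illustrative; private-key persistence is omitted):

    // issueServerCert signs a server certificate with the SANs shown in the log,
    // using an already-loaded CA certificate and key.
    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) error {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-008703-m04"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-008703-m04", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return err
        }
        return os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
    }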
	I1212 21:06:58.498961  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:06:58.499231  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:58.499373  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:58.531077  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:58.531395  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:58.531411  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:06:59.036280  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:06:59.036310  449185 machine.go:97] duration metric: took 4.83214688s to provisionDockerMachine
	I1212 21:06:59.036331  449185 start.go:293] postStartSetup for "ha-008703-m04" (driver="docker")
	I1212 21:06:59.036343  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:06:59.036466  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:06:59.036523  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.086256  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.217706  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:06:59.225272  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:06:59.225304  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:06:59.225326  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:06:59.225398  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:06:59.225489  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:06:59.225502  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:06:59.225626  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:06:59.239694  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:59.289259  449185 start.go:296] duration metric: took 252.894748ms for postStartSetup
	I1212 21:06:59.289353  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:06:59.289435  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.318501  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.433235  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:06:59.440975  449185 fix.go:56] duration metric: took 5.671680345s for fixHost
	I1212 21:06:59.441000  449185 start.go:83] releasing machines lock for "ha-008703-m04", held for 5.671734343s
	I1212 21:06:59.441074  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:59.473221  449185 out.go:179] * Found network options:
	I1212 21:06:59.477821  449185 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1212 21:06:59.480861  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480899  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480912  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480936  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480956  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480968  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:06:59.481044  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:06:59.481089  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.481371  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:06:59.481425  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.521656  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.528821  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.865561  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:06:59.874595  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:06:59.874667  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:06:59.887303  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:06:59.887378  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:06:59.887427  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:06:59.887500  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:06:59.908986  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:06:59.940196  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:06:59.940301  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:06:59.959663  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:06:59.976282  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:07:00.307427  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:07:00.569417  449185 docker.go:234] disabling docker service ...
	I1212 21:07:00.569500  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:07:00.607031  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:07:00.633272  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:07:00.844907  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:07:01.084528  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:07:01.108001  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:07:01.130446  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:07:01.130569  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.145280  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:07:01.145425  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.165912  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.178770  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.192394  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:07:01.203182  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.214233  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.224343  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.236075  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:07:01.246300  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:07:01.256331  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:01.516203  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
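The sed commands above pin the pause image and the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf before CRI-O is restarted. A Go sketch applying the same two rewrites to the file contents, using regular expressions equivalent to the sed expressions (path as shown in the log):

    // rewriteCrioConf mirrors the two sed edits from the log: set pause_image and
    // cgroup_manager in 02-crio.conf, leaving the rest of the file untouched.
    package crioconf

    import (
        "os"
        "regexp"
    )

    func rewriteCrioConf(path string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
        return os.WriteFile(path, out, 0o644)
    }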
	I1212 21:07:01.766997  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:07:01.767119  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:07:01.776270  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:07:01.776437  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:07:01.784745  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:07:01.824822  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:07:01.824977  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:07:01.889046  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:07:01.956065  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:07:01.959062  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:07:01.962079  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1212 21:07:01.964978  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1212 21:07:01.967779  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:07:01.996732  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:07:02.001678  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
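The one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the gateway mapping for 192.168.49.1. A small in-memory Go sketch of that replace-or-append logic (illustrative only; the real step edits the file over SSH):

    // ensureHostEntry removes any line ending in "<TAB>host" and appends
    // "ip<TAB>host", matching the grep -v / echo pipeline in the log.
    package hosts

    import "strings"

    func ensureHostEntry(hostsFile, ip, host string) string {
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(hostsFile, "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the stale entry, as `grep -v` does
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return strings.Join(kept, "\n") + "\n"
    }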
	I1212 21:07:02.020405  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:07:02.020654  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:07:02.020930  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:07:02.039611  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:07:02.039893  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.5
	I1212 21:07:02.039901  449185 certs.go:195] generating shared ca certs ...
	I1212 21:07:02.039915  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:07:02.040028  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:07:02.040067  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:07:02.040078  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:07:02.040092  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:07:02.040104  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:07:02.040116  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:07:02.040169  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:07:02.040202  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:07:02.040210  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:07:02.040237  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:07:02.040261  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:07:02.040288  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:07:02.040334  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:07:02.040380  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.040396  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.040407  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.040424  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:07:02.066397  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:07:02.105376  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:07:02.137944  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:07:02.170023  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:07:02.210932  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:07:02.238540  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:07:02.269874  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:07:02.281063  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.291218  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:07:02.301041  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.308712  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.308786  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.368311  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:07:02.378631  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.387217  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:07:02.398975  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.403766  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.403869  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.470421  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:07:02.480522  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.493373  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:07:02.510638  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.516014  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.516150  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.591218  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
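Each of the three certificate blocks above follows the same pattern: symlink the PEM into /etc/ssl/certs, compute its OpenSSL subject hash, and verify the /etc/ssl/certs/<hash>.0 link that OpenSSL resolves at verification time. A Go sketch of the hash-and-link step, shelling out to the same openssl invocation as the log:

    // linkCACert computes the OpenSSL subject hash of a PEM certificate and
    // creates the /etc/ssl/certs/<hash>.0 symlink, mirroring the openssl x509
    // -hash and ln -fs steps above.
    package cacerts

    import (
        "os"
        "os/exec"
        "strings"
    )

    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := "/etc/ssl/certs/" + hash + ".0"
        _ = os.Remove(link) // emulate the -f behaviour of ln -fs
        return os.Symlink(pemPath, link)
    }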
	I1212 21:07:02.600904  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:07:02.619811  449185 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 21:07:02.619887  449185 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2 crio false true} ...
	I1212 21:07:02.619990  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:07:02.620088  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:07:02.636422  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:07:02.636540  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 21:07:02.650400  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:07:02.684861  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
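The kubeadm.go:947 block above is the kubelet systemd drop-in that ends up in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes here), parameterised by node name, node IP and Kubernetes version. A text/template sketch that renders an equivalent drop-in for a worker node (struct and function names are illustrative):

    // renderKubeletDropIn renders a systemd drop-in like the one shown above.
    package kubelet

    import (
        "bytes"
        "text/template"
    )

    const dropInTmpl = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    type nodeParams struct {
        KubernetesVersion, NodeName, NodeIP string
    }

    func renderKubeletDropIn(p nodeParams) (string, error) {
        var buf bytes.Buffer
        if err := template.Must(template.New("dropin").Parse(dropInTmpl)).Execute(&buf, p); err != nil {
            return "", err
        }
        return buf.String(), nil
    }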
	I1212 21:07:02.708803  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:07:02.713707  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:07:02.731184  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:03.010394  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:07:03.061651  449185 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 21:07:03.062018  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:07:03.067183  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:07:03.070801  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:03.406466  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:07:03.471431  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:07:03.471508  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:07:03.471736  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m04" to be "Ready" ...
	I1212 21:07:03.505163  449185 node_ready.go:49] node "ha-008703-m04" is "Ready"
	I1212 21:07:03.505194  449185 node_ready.go:38] duration metric: took 33.438197ms for node "ha-008703-m04" to be "Ready" ...
	I1212 21:07:03.505209  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:07:03.505266  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:07:03.526122  449185 system_svc.go:56] duration metric: took 20.904535ms WaitForService to wait for kubelet
	I1212 21:07:03.526155  449185 kubeadm.go:587] duration metric: took 464.111537ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:07:03.526175  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:07:03.582671  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582703  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582714  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582719  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582723  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582727  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582731  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582735  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582741  449185 node_conditions.go:105] duration metric: took 56.560779ms to run NodePressure ...
	I1212 21:07:03.582752  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:07:03.582774  449185 start.go:256] writing updated cluster config ...
	I1212 21:07:03.583086  449185 ssh_runner.go:195] Run: rm -f paused
	I1212 21:07:03.601326  449185 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:07:03.602059  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:07:03.627964  449185 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8tvqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.640449  449185 pod_ready.go:94] pod "coredns-66bc5c9577-8tvqx" is "Ready"
	I1212 21:07:03.640525  449185 pod_ready.go:86] duration metric: took 12.481008ms for pod "coredns-66bc5c9577-8tvqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.640551  449185 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kls2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.647941  449185 pod_ready.go:94] pod "coredns-66bc5c9577-kls2t" is "Ready"
	I1212 21:07:03.648021  449185 pod_ready.go:86] duration metric: took 7.447403ms for pod "coredns-66bc5c9577-kls2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.734522  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.742549  449185 pod_ready.go:94] pod "etcd-ha-008703" is "Ready"
	I1212 21:07:03.742645  449185 pod_ready.go:86] duration metric: took 8.036611ms for pod "etcd-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.742670  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.751107  449185 pod_ready.go:94] pod "etcd-ha-008703-m02" is "Ready"
	I1212 21:07:03.751180  449185 pod_ready.go:86] duration metric: took 8.490203ms for pod "etcd-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.751203  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.802884  449185 request.go:683] "Waited before sending request" delay="51.579039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-008703-m03"
	I1212 21:07:04.003143  449185 request.go:683] "Waited before sending request" delay="191.298042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:04.008011  449185 pod_ready.go:94] pod "etcd-ha-008703-m03" is "Ready"
	I1212 21:07:04.008105  449185 pod_ready.go:86] duration metric: took 256.8794ms for pod "etcd-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.203542  449185 request.go:683] "Waited before sending request" delay="195.301148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1212 21:07:04.208571  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.402858  449185 request.go:683] "Waited before sending request" delay="194.13984ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703"
	I1212 21:07:04.603054  449185 request.go:683] "Waited before sending request" delay="196.30777ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:04.607366  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703" is "Ready"
	I1212 21:07:04.607392  449185 pod_ready.go:86] duration metric: took 398.743662ms for pod "kube-apiserver-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.607403  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.802681  449185 request.go:683] "Waited before sending request" delay="195.203703ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703-m02"
	I1212 21:07:05.004599  449185 request.go:683] "Waited before sending request" delay="198.050663ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:05.009883  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703-m02" is "Ready"
	I1212 21:07:05.009916  449185 pod_ready.go:86] duration metric: took 402.505715ms for pod "kube-apiserver-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.009927  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.203348  449185 request.go:683] "Waited before sending request" delay="193.318894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703-m03"
	I1212 21:07:05.402598  449185 request.go:683] "Waited before sending request" delay="195.266325ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:05.407026  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703-m03" is "Ready"
	I1212 21:07:05.407054  449185 pod_ready.go:86] duration metric: took 397.119016ms for pod "kube-apiserver-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.603514  449185 request.go:683] "Waited before sending request" delay="196.332041ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1212 21:07:05.609335  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.802598  449185 request.go:683] "Waited before sending request" delay="193.136821ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703"
	I1212 21:07:06.002969  449185 request.go:683] "Waited before sending request" delay="196.400711ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:06.009868  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703" is "Ready"
	I1212 21:07:06.009898  449185 pod_ready.go:86] duration metric: took 400.534916ms for pod "kube-controller-manager-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.009910  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.203284  449185 request.go:683] "Waited before sending request" delay="193.288724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703-m02"
	I1212 21:07:06.403087  449185 request.go:683] "Waited before sending request" delay="195.335069ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:06.406992  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703-m02" is "Ready"
	I1212 21:07:06.407024  449185 pod_ready.go:86] duration metric: took 397.103754ms for pod "kube-controller-manager-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.407035  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.603444  449185 request.go:683] "Waited before sending request" delay="196.318585ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703-m03"
	I1212 21:07:06.803243  449185 request.go:683] "Waited before sending request" delay="196.311315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:06.811152  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703-m03" is "Ready"
	I1212 21:07:06.811182  449185 pod_ready.go:86] duration metric: took 404.13997ms for pod "kube-controller-manager-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.003659  449185 request.go:683] "Waited before sending request" delay="192.369133ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1212 21:07:07.008682  449185 pod_ready.go:83] waiting for pod "kube-proxy-26llr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.203112  449185 request.go:683] "Waited before sending request" delay="194.317566ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26llr"
	I1212 21:07:07.403112  449185 request.go:683] "Waited before sending request" delay="196.188213ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m04"
	I1212 21:07:07.406710  449185 pod_ready.go:94] pod "kube-proxy-26llr" is "Ready"
	I1212 21:07:07.406741  449185 pod_ready.go:86] duration metric: took 398.024461ms for pod "kube-proxy-26llr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.406752  449185 pod_ready.go:83] waiting for pod "kube-proxy-5cjcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.603217  449185 request.go:683] "Waited before sending request" delay="196.391784ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5cjcj"
	I1212 21:07:07.802591  449185 request.go:683] "Waited before sending request" delay="195.268704ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:07.806437  449185 pod_ready.go:94] pod "kube-proxy-5cjcj" is "Ready"
	I1212 21:07:07.806468  449185 pod_ready.go:86] duration metric: took 399.70889ms for pod "kube-proxy-5cjcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.806478  449185 pod_ready.go:83] waiting for pod "kube-proxy-tgx5j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.003374  449185 request.go:683] "Waited before sending request" delay="196.807041ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgx5j"
	I1212 21:07:08.203254  449185 request.go:683] "Waited before sending request" delay="193.281921ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:08.206488  449185 pod_ready.go:94] pod "kube-proxy-tgx5j" is "Ready"
	I1212 21:07:08.206516  449185 pod_ready.go:86] duration metric: took 400.031584ms for pod "kube-proxy-tgx5j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.206527  449185 pod_ready.go:83] waiting for pod "kube-proxy-v8lm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.402890  449185 request.go:683] "Waited before sending request" delay="196.283952ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v8lm4"
	I1212 21:07:08.602890  449185 request.go:683] "Waited before sending request" delay="190.306444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:08.606678  449185 pod_ready.go:94] pod "kube-proxy-v8lm4" is "Ready"
	I1212 21:07:08.606704  449185 pod_ready.go:86] duration metric: took 400.170499ms for pod "kube-proxy-v8lm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.803166  449185 request.go:683] "Waited before sending request" delay="196.329375ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1212 21:07:08.807939  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.006982  449185 request.go:683] "Waited before sending request" delay="198.916082ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703"
	I1212 21:07:09.203284  449185 request.go:683] "Waited before sending request" delay="192.346692ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:09.206489  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703" is "Ready"
	I1212 21:07:09.206522  449185 pod_ready.go:86] duration metric: took 398.549635ms for pod "kube-scheduler-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.206532  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.402973  449185 request.go:683] "Waited before sending request" delay="196.306934ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703-m02"
	I1212 21:07:09.603345  449185 request.go:683] "Waited before sending request" delay="192.346225ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:09.611536  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703-m02" is "Ready"
	I1212 21:07:09.611565  449185 pod_ready.go:86] duration metric: took 405.026929ms for pod "kube-scheduler-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.611575  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.802963  449185 request.go:683] "Waited before sending request" delay="191.311533ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703-m03"
	I1212 21:07:10.004827  449185 request.go:683] "Waited before sending request" delay="198.485333ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:10.012647  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703-m03" is "Ready"
	I1212 21:07:10.012677  449185 pod_ready.go:86] duration metric: took 401.094897ms for pod "kube-scheduler-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:10.012691  449185 pod_ready.go:40] duration metric: took 6.411220695s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:07:10.085120  449185 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 21:07:10.090453  449185 out.go:179] * Done! kubectl is now configured to use "ha-008703" cluster and "default" namespace by default
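Editor's note: the pod_ready.go lines above record minikube polling each kube-system control-plane pod (selected by its component label) until it reports the Ready condition. The following is a minimal, hypothetical client-go sketch of that kind of wait loop, not minikube's actual pod_ready.go implementation; the namespace, selector, and timeout are illustrative assumptions.

// Hypothetical sketch of a "wait until pods with this label are Ready" loop,
// similar in spirit to the pod_ready.go checks logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForComponent polls kube-system pods matching selector until all are Ready.
func waitForComponent(ctx context.Context, cs *kubernetes.Clientset, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allReady := len(pods.Items) > 0
		for i := range pods.Items {
			if !podIsReady(&pods.Items[i]) {
				allReady = false
				break
			}
		}
		if allReady {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second): // illustrative poll interval
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForComponent(ctx, cs, "component=kube-apiserver"); err != nil {
		panic(err)
	}
	fmt.Println("all kube-apiserver pods are Ready")
}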
	
	
	==> CRI-O <==
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.084643835Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4f025e76-4eca-4fb1-b55a-f8d9a43fa536 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.087572223Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8ebdfa7e-5f7d-4824-b4b7-0fe2edd10aff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.087672564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.095689671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.0959013Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eb92904b79612128723b08cf808f293d7aa852c53deebc7388a003f7a25a6f9f/merged/etc/passwd: no such file or directory"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.095933095Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eb92904b79612128723b08cf808f293d7aa852c53deebc7388a003f7a25a6f9f/merged/etc/group: no such file or directory"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.096211382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.136290189Z" level=info msg="Created container 5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145: kube-system/storage-provisioner/storage-provisioner" id=8ebdfa7e-5f7d-4824-b4b7-0fe2edd10aff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.137414204Z" level=info msg="Starting container: 5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145" id=c9a226e6-422b-41f8-9e9f-add9192400a7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.14248122Z" level=info msg="Started container" PID=1398 containerID=5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145 description=kube-system/storage-provisioner/storage-provisioner id=c9a226e6-422b-41f8-9e9f-add9192400a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.077353049Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.084667544Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.090321422Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.090434276Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.101511448Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.108846054Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.108901554Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.125800597Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.125957924Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.126043537Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133398738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133546145Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133624332Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.148814452Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.148949928Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5129752cc0a67       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   20 seconds ago       Running             storage-provisioner       2                   1b6b1faf503c8       storage-provisioner                 kube-system
	3f4c5923951e8       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   51 seconds ago       Running             busybox                   1                   9a656c52a260b       busybox-7b57f96db7-tczdt            default
	560dd3383ed66       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   51 seconds ago       Running             coredns                   1                   2f24e16e55927       coredns-66bc5c9577-8tvqx            kube-system
	7cef3eaf30308       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   51 seconds ago       Running             kindnet-cni               1                   021217a0cf931       kindnet-f7h24                       kube-system
	82dd101ece4d1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   51 seconds ago       Exited              storage-provisioner       1                   1b6b1faf503c8       storage-provisioner                 kube-system
	ad94d81034c43       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   51 seconds ago       Running             coredns                   1                   b75479f05351c       coredns-66bc5c9577-kls2t            kube-system
	2b11faa987b07       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   52 seconds ago       Running             kube-proxy                1                   66c81b9e2ff38       kube-proxy-tgx5j                    kube-system
	f08cf114510a2       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   52 seconds ago       Running             kube-controller-manager   8                   19bf9c82b9d81       kube-controller-manager-ha-008703   kube-system
	93fc3054083af       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Running             kube-apiserver            8                   8176618f6ba71       kube-apiserver-ha-008703            kube-system
	05ba874359221       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Running             kube-scheduler            2                   60ffed268d568       kube-scheduler-ha-008703            kube-system
	6e71e63256727       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            7                   8176618f6ba71       kube-apiserver-ha-008703            kube-system
	62a05b797d322       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   About a minute ago   Running             kube-vip                  1                   8e01afee41b4c       kube-vip-ha-008703                  kube-system
	03159ef735d03       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   7                   19bf9c82b9d81       kube-controller-manager-ha-008703   kube-system
	e2542b7b3b0ad       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Running             etcd                      3                   e36007e1324cc       etcd-ha-008703                      kube-system
	
	
	==> coredns [560dd3383ed66f823e585260ec4823152488386a1e71bacea6cd9ca156adb2d8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52286 - 29430 "HINFO IN 4498128949033305171.1950480245235256825. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020264931s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ad94d81034c434b44c842f2117ddb8a51227d702a250a41dac1fac6dcf4f0e1c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36509 - 26980 "HINFO IN 2040533104487656964.3099826236879850204. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003954694s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
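Editor's note: both coredns instances log "dial tcp 10.96.0.1:443: i/o timeout" while listing Services/Namespaces/EndpointSlices, i.e. the in-cluster kubernetes Service VIP was unreachable until the restarted control plane and CNI came back. A minimal probe like the sketch below (run from inside the cluster network; the 10.96.0.1:443 address is taken from the errors above, the 5s timeout is an assumption) reproduces the same symptom and helps distinguish a CoreDNS problem from a kube-proxy/CNI one.

// Minimal connectivity probe for the kubernetes Service VIP seen in the coredns errors.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		// Matches the "i/o timeout" / "connection refused" class of errors logged by coredns.
		fmt.Println("service VIP unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("service VIP reachable")
}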
	
	
	==> describe nodes <==
	Name:               ha-008703
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_52_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:52:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:07:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:06:20 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:06:20 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:06:20 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:06:20 +0000   Fri, 12 Dec 2025 20:52:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-008703
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                6ff1a8bd-14d1-41ae-8cb8-9156f60dd654
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tczdt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-8tvqx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 coredns-66bc5c9577-kls2t             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     14m
	  kube-system                 etcd-ha-008703                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-f7h24                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-008703             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-008703    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-tgx5j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-008703             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-008703                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 49s                kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m (x8 over 15m)  kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   NodeReady                14m                kubelet          Node ha-008703 status is now: NodeReady
	  Normal   RegisteredNode           13m                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           9m55s              node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   Starting                 87s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 87s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  87s (x8 over 87s)  kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    87s (x8 over 87s)  kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     87s (x8 over 87s)  kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           47s                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           11s                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	
	
	Name:               ha-008703-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_52_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:52:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:07:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:53:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-008703-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                ca808c21-ecc5-4ee7-9940-dffdef1da5b2
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hltw8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-008703-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-blbfb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-008703-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-008703-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5cjcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-008703-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-008703-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 30s                kube-proxy       
	  Normal   Starting                 9m56s              kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Normal   RegisteredNode           14m                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-008703-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           9m55s              node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Warning  CgroupV1                 83s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node ha-008703-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s (x8 over 83s)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           48s                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           47s                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           11s                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	
	
	Name:               ha-008703-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_54_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:54:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:07:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:07:09 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:07:09 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:07:09 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:07:09 +0000   Fri, 12 Dec 2025 20:54:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-008703-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                fa4c05be-b5d2-4bf0-a4b6-630b820e0e0a
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kc6ms                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-008703-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-6dvv4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-008703-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-008703-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-v8lm4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-008703-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-008703-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   CIDRAssignmentFailed     13m                cidrAllocator    Node ha-008703-m03 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           13m                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           9m55s              node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           48s                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           47s                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   Starting                 46s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node ha-008703-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node ha-008703-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node ha-008703-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11s                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	
	
	Name:               ha-008703-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_55_24_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:55:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:07:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:07:08 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:07:08 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:07:08 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:07:08 +0000   Fri, 12 Dec 2025 20:56:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-008703-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                8a9366c1-4fff-44a3-a6b8-824607a69efc
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fwsws       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-26llr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x3 over 11m)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientPID
	  Normal   CIDRAssignmentFailed     11m                cidrAllocator    Node ha-008703-m04 status is now: CIDRAssignmentFailed
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x3 over 11m)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x3 over 11m)  kubelet          Node ha-008703-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           11m                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   NodeReady                11m                kubelet          Node ha-008703-m04 status is now: NodeReady
	  Normal   RegisteredNode           9m55s              node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           48s                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           47s                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   Starting                 18s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  15s (x8 over 18s)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    15s (x8 over 18s)  kubelet          Node ha-008703-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     15s (x8 over 18s)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           11s                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
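Editor's note: the four node descriptions above all show MemoryPressure/DiskPressure/PIDPressure False and Ready True after the restart. The sketch below is a hypothetical client-go snippet (not part of the test suite) that dumps the same node conditions, which can be useful when an HA scenario like this needs to assert that every node came back Ready.

// Hypothetical sketch: print each node's conditions, mirroring the tables above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
		}
	}
}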
	
	
	==> dmesg <==
	[Dec12 17:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	[Dec12 20:52] overlayfs: idmapped layers are currently not supported
	[ +33.094252] overlayfs: idmapped layers are currently not supported
	[Dec12 20:53] overlayfs: idmapped layers are currently not supported
	[Dec12 20:55] overlayfs: idmapped layers are currently not supported
	[Dec12 20:56] overlayfs: idmapped layers are currently not supported
	[Dec12 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.790478] overlayfs: idmapped layers are currently not supported
	[Dec12 21:05] overlayfs: idmapped layers are currently not supported
	[  +3.613273] overlayfs: idmapped layers are currently not supported
	[Dec12 21:06] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e2542b7b3b0add4c1c8e1167b6f86cc40b8c70e55d0db7ae97014db17bfee8b2] <==
	{"level":"warn","ts":"2025-12-12T21:06:33.065790Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:34.385240Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"61bc3757651ee949"}
	{"level":"warn","ts":"2025-12-12T21:06:37.066825Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:37.066883Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:37.766673Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:37.766690Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:41.068742Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:41.068796Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:42.766800Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:42.766892Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:45.070740Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:45.070818Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:47.767522Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:47.767544Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:49.072862Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:49.072916Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-12-12T21:06:52.518541Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"61bc3757651ee949","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-12T21:06:52.518591Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"61bc3757651ee949"}
	{"level":"info","ts":"2025-12-12T21:06:52.518603Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"61bc3757651ee949"}
	{"level":"info","ts":"2025-12-12T21:06:52.527855Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"61bc3757651ee949","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-12T21:06:52.527959Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"61bc3757651ee949"}
	{"level":"info","ts":"2025-12-12T21:06:52.573914Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"61bc3757651ee949"}
	{"level":"info","ts":"2025-12-12T21:06:52.574238Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"61bc3757651ee949"}
	{"level":"warn","ts":"2025-12-12T21:06:52.767676Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:52.767687Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 21:07:13 up  3:49,  0 user,  load average: 4.72, 1.88, 1.21
	Linux ha-008703 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7cef3eaf30308ab6e267a8568bc724dbe47546cc79d171e489dd52fca0f76a09] <==
	E1212 21:06:52.117526       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1212 21:06:52.117654       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1212 21:06:52.126134       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1212 21:06:53.716520       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 21:06:53.716636       1 metrics.go:72] Registering metrics
	I1212 21:06:53.716756       1 controller.go:711] "Syncing nftables rules"
	I1212 21:07:02.075035       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1212 21:07:02.075188       1 main.go:324] Node ha-008703-m02 has CIDR [10.244.1.0/24] 
	I1212 21:07:02.075398       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0 Realm: 0} 
	I1212 21:07:02.075556       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1212 21:07:02.075607       1 main.go:324] Node ha-008703-m03 has CIDR [10.244.2.0/24] 
	I1212 21:07:02.075742       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.4 Flags: [] Table: 0 Realm: 0} 
	I1212 21:07:02.075878       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1212 21:07:02.075929       1 main.go:324] Node ha-008703-m04 has CIDR [10.244.3.0/24] 
	I1212 21:07:02.076051       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0 Realm: 0} 
	I1212 21:07:02.076199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 21:07:02.076238       1 main.go:301] handling current node
	I1212 21:07:12.074942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 21:07:12.074981       1 main.go:301] handling current node
	I1212 21:07:12.074999       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1212 21:07:12.075015       1 main.go:324] Node ha-008703-m02 has CIDR [10.244.1.0/24] 
	I1212 21:07:12.075206       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1212 21:07:12.075219       1 main.go:324] Node ha-008703-m03 has CIDR [10.244.2.0/24] 
	I1212 21:07:12.075288       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1212 21:07:12.075298       1 main.go:324] Node ha-008703-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [6e71e63256727335b637c10c11453815d5622c8d5eb3fb9654535f5b4b692c2f] <==
	I1212 21:05:47.565735       1 server.go:150] Version: v1.34.2
	I1212 21:05:47.569343       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1212 21:05:49.281036       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1212 21:05:49.281145       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1212 21:05:49.281179       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1212 21:05:49.281210       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1212 21:05:49.281240       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1212 21:05:49.281267       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1212 21:05:49.281295       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1212 21:05:49.281322       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1212 21:05:49.281350       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1212 21:05:49.281379       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1212 21:05:49.281408       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1212 21:05:49.281437       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1212 21:05:49.315159       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:05:49.315278       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1212 21:05:49.320436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1212 21:05:49.332820       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 21:05:49.333128       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1212 21:05:49.333192       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1212 21:05:49.333470       1 instance.go:239] Using reconciler: lease
	W1212 21:05:49.335311       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:06:09.313704       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1212 21:06:09.313704       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1212 21:06:09.334486       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [93fc3054083af7a4f11519559898692bcb87a0a869c0e823fd96f50def2f02cd] <==
	I1212 21:06:20.368230       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 21:06:20.400872       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 21:06:20.412450       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 21:06:20.421494       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 21:06:20.413161       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 21:06:20.433292       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 21:06:20.435830       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 21:06:20.439607       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 21:06:20.439971       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 21:06:20.446200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 21:06:20.446507       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 21:06:20.451816       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 21:06:20.466902       1 cache.go:39] Caches are synced for autoregister controller
	W1212 21:06:20.494872       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I1212 21:06:20.498501       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 21:06:20.540491       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 21:06:20.544831       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1212 21:06:20.560023       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1212 21:06:20.915382       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 21:06:21.151536       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1212 21:06:24.277503       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I1212 21:06:26.132404       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 21:06:26.286031       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 21:06:26.435234       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	W1212 21:06:34.277202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	
	
	==> kube-controller-manager [03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d] <==
	I1212 21:05:49.621747       1 serving.go:386] Generated self-signed cert in-memory
	I1212 21:05:50.751392       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1212 21:05:50.752418       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:05:50.756190       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1212 21:05:50.756306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 21:05:50.756352       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 21:05:50.756362       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1212 21:06:20.286877       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [f08cf114510a22705e6eddaabf72535ab357ca9404fe3342c1903bc51578da78] <==
	I1212 21:06:25.947009       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-008703"
	I1212 21:06:25.947060       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 21:06:25.946360       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 21:06:25.948255       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 21:06:25.948778       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 21:06:25.949912       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 21:06:25.956884       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 21:06:25.956955       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 21:06:25.958970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 21:06:25.962893       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 21:06:25.966650       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 21:06:25.966831       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 21:06:25.966929       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 21:06:25.970777       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1212 21:06:25.977116       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 21:06:25.978294       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 21:06:25.978569       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 21:06:25.979499       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 21:06:25.983384       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 21:06:25.991347       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 21:06:25.992778       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 21:06:26.003403       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 21:06:26.005063       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 21:07:03.404820       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-88mnq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-88mnq\": the object has been modified; please apply your changes to the latest version and try again"
	I1212 21:07:03.412728       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0e70dacf-1fbe-4ce7-930f-4790639720ae", APIVersion:"v1", ResourceVersion:"293", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-88mnq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-88mnq": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [2b11faa987b07a654a1ecb1110634491c33e925915fa00680eccd4a7874806fc] <==
	I1212 21:06:23.734028       1 server_linux.go:53] "Using iptables proxy"
	I1212 21:06:24.050201       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 21:06:24.251547       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 21:06:24.251592       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1212 21:06:24.251667       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 21:06:24.378453       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 21:06:24.378516       1 server_linux.go:132] "Using iptables Proxier"
	I1212 21:06:24.392940       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 21:06:24.393314       1 server.go:527] "Version info" version="v1.34.2"
	I1212 21:06:24.393544       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:06:24.394794       1 config.go:200] "Starting service config controller"
	I1212 21:06:24.394851       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 21:06:24.394892       1 config.go:106] "Starting endpoint slice config controller"
	I1212 21:06:24.394921       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 21:06:24.394957       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 21:06:24.394983       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 21:06:24.395714       1 config.go:309] "Starting node config controller"
	I1212 21:06:24.398250       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 21:06:24.398321       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 21:06:24.497136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 21:06:24.497308       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 21:06:24.497322       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [05ba874359221bdf846b1fb8dbe911f962d4cf06c723c81f7a60410d0ca7fa2b] <==
	I1212 21:06:20.248139       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 21:06:20.248183       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:06:20.270188       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:06:20.270295       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:06:20.276803       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 21:06:20.277005       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 21:06:20.368920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 21:06:20.369035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 21:06:20.369105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 21:06:20.369154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:06:20.369207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 21:06:20.369802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:06:20.369869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:06:20.369925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:06:20.369973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 21:06:20.370030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 21:06:20.370079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 21:06:20.370124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 21:06:20.371252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 21:06:20.371299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 21:06:20.371338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 21:06:20.438949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 21:06:20.444983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 21:06:20.445109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1212 21:06:20.470730       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 21:06:20 ha-008703 kubelet[764]: E1212 21:06:20.676261     764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-008703\" already exists" pod="kube-system/kube-controller-manager-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.676518     764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.684227     764 apiserver.go:52] "Watching apiserver"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.715180     764 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-008703" podUID="13ad7cce-3343-4a6d-b066-b55715ef2727"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.733772     764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c574b029f9f86252bb40df91aa285cf" path="/var/lib/kubelet/pods/4c574b029f9f86252bb40df91aa285cf/volumes"
	Dec 12 21:06:20 ha-008703 kubelet[764]: E1212 21:06:20.737750     764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-008703\" already exists" pod="kube-system/kube-scheduler-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.772520     764 kubelet.go:3209] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.772704     764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.789443     764 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.857272     764 scope.go:117] "RemoveContainer" containerID="03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.891614     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee2850f7-5474-48e9-b8dc-f9e14292127e-xtables-lock\") pod \"kube-proxy-tgx5j\" (UID: \"ee2850f7-5474-48e9-b8dc-f9e14292127e\") " pod="kube-system/kube-proxy-tgx5j"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.891885     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee2850f7-5474-48e9-b8dc-f9e14292127e-lib-modules\") pod \"kube-proxy-tgx5j\" (UID: \"ee2850f7-5474-48e9-b8dc-f9e14292127e\") " pod="kube-system/kube-proxy-tgx5j"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892133     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-xtables-lock\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892297     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d57f23f-4461-4d86-b91f-e2628d8874ab-tmp\") pod \"storage-provisioner\" (UID: \"2d57f23f-4461-4d86-b91f-e2628d8874ab\") " pod="kube-system/storage-provisioner"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892406     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-cni-cfg\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.898926     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-lib-modules\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.897461     764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-008703" podStartSLOduration=0.897445384 podStartE2EDuration="897.445384ms" podCreationTimestamp="2025-12-12 21:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 21:06:20.850652974 +0000 UTC m=+34.291145116" watchObservedRunningTime="2025-12-12 21:06:20.897445384 +0000 UTC m=+34.337937510"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.972495     764 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.192647     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e WatchSource:0}: Error finding container b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e: Status 404 returned error can't find the container with id b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.402414     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226 WatchSource:0}: Error finding container 1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226: Status 404 returned error can't find the container with id 1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.434279     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced WatchSource:0}: Error finding container 021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced: Status 404 returned error can't find the container with id 021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.570067     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967 WatchSource:0}: Error finding container 2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967: Status 404 returned error can't find the container with id 2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967
	Dec 12 21:06:46 ha-008703 kubelet[764]: E1212 21:06:46.699197     764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50\": container with ID starting with f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50 not found: ID does not exist" containerID="f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50"
	Dec 12 21:06:46 ha-008703 kubelet[764]: I1212 21:06:46.699251     764 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50" err="rpc error: code = NotFound desc = could not find container \"f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50\": container with ID starting with f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50 not found: ID does not exist"
	Dec 12 21:06:53 ha-008703 kubelet[764]: I1212 21:06:53.074350     764 scope.go:117] "RemoveContainer" containerID="82dd101ece4d11a82b5e84808cb05db3a78e943db22ae1196fbeeda7f49c4b53"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703
helpers_test.go:270: (dbg) Run:  kubectl --context ha-008703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (95.77s)
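The etcd excerpt above shows the local member (aec36adc501070cc) repeatedly failing to dial peer 61bc3757651ee949 at 192.168.49.4:2380 ("connection refused") until the raft streams are re-established at 21:06:52. As a hedged triage aid, the commands below sketch how etcd member health could be checked by hand from the etcd static pod; the pod name follows the usual kubeadm static-pod naming and the certificate paths assume minikube's /var/lib/minikube/certs/etcd layout, so treat both as assumptions rather than values confirmed by this report.

    # Minimal sketch (assumptions noted above): run etcdctl inside the etcd static pod
    # and ask every member of the cluster to report its health.
    kubectl --context ha-008703 -n kube-system exec etcd-ha-008703 -- etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/var/lib/minikube/certs/etcd/ca.crt \
      --cert=/var/lib/minikube/certs/etcd/server.crt \
      --key=/var/lib/minikube/certs/etcd/server.key \
      endpoint health --cluster

With --cluster, etcdctl discovers all member client URLs from the member list, so an unreachable member such as 192.168.49.4 would be reported unhealthy in the same way the raft prober flags it in the log above.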

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.116808254s)
ha_test.go:415: expected profile "ha-008703" in json of 'profile list' to have "Degraded" status but have "HAppy" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008703\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-008703\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesR
oot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-008703\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name
\":\"m02\",\"IP\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-dev
ice-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\"
:false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-008703
helpers_test.go:244: (dbg) docker inspect ha-008703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	        "Created": "2025-12-12T20:51:45.347520369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 449316,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:05:39.880681825Z",
	            "FinishedAt": "2025-12-12T21:05:38.645326548Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hosts",
	        "LogPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a-json.log",
	        "Name": "/ha-008703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-008703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-008703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	                "LowerDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-008703",
	                "Source": "/var/lib/docker/volumes/ha-008703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-008703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-008703",
	                "name.minikube.sigs.k8s.io": "ha-008703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56820d5d7e78ec2f02da47e339541c9ef651db5d532d64770a21ce2bbb5446a4",
	            "SandboxKey": "/var/run/docker/netns/56820d5d7e78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-008703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:e7:89:49:21:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff7ed303f4da65b7f5bbe1449be583e134fa05bb2920a77ae31b6f437cc1bd4b",
	                    "EndpointID": "3c6a3818203b2804ed1a97d15e01e57b58ac1b4d017d616dc02dd9125b0a0f3c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-008703",
	                        "2ec03df03a30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703
helpers_test.go:253: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 logs -n 25: (2.012110904s)
helpers_test.go:261: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-008703 cp ha-008703-m03:/home/docker/cp-test.txt ha-008703-m04:/home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp testdata/cp-test.txt ha-008703-m04:/home/docker/cp-test.txt                                                            │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703-m04.txt │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703:/home/docker/cp-test_ha-008703-m04_ha-008703.txt                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703.txt                                                │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m02 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m03:/home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node start m02 --alsologtostderr -v 5                                                                                     │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:57 UTC │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │ 12 Dec 25 20:57 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5                                                                                  │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	│ node    │ ha-008703 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │ 12 Dec 25 21:05 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │ 12 Dec 25 21:07 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:05:39
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:05:39.605178  449185 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:05:39.605402  449185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:39.605430  449185 out.go:374] Setting ErrFile to fd 2...
	I1212 21:05:39.605450  449185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:39.605864  449185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:05:39.606369  449185 out.go:368] Setting JSON to false
	I1212 21:05:39.607946  449185 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":13692,"bootTime":1765559848,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 21:05:39.608060  449185 start.go:143] virtualization:  
	I1212 21:05:39.611335  449185 out.go:179] * [ha-008703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 21:05:39.615242  449185 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:05:39.615314  449185 notify.go:221] Checking for updates...
	I1212 21:05:39.621077  449185 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:05:39.623949  449185 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:39.626804  449185 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 21:05:39.629715  449185 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 21:05:39.632603  449185 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:05:39.635954  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:39.636566  449185 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:05:39.669276  449185 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 21:05:39.669398  449185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:05:39.732289  449185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 21:05:39.722148611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:05:39.732454  449185 docker.go:319] overlay module found
	I1212 21:05:39.735677  449185 out.go:179] * Using the docker driver based on existing profile
	I1212 21:05:39.738449  449185 start.go:309] selected driver: docker
	I1212 21:05:39.738468  449185 start.go:927] validating driver "docker" against &{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:39.738617  449185 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:05:39.738715  449185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:05:39.793928  449185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 21:05:39.784653162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:05:39.794497  449185 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:05:39.794535  449185 cni.go:84] Creating CNI manager for ""
	I1212 21:05:39.794590  449185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 21:05:39.794655  449185 start.go:353] cluster config:
	{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:39.797771  449185 out.go:179] * Starting "ha-008703" primary control-plane node in "ha-008703" cluster
	I1212 21:05:39.800532  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:05:39.803460  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:05:39.806386  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:39.806435  449185 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 21:05:39.806449  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:05:39.806468  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:05:39.806557  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:05:39.806568  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:05:39.806736  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:39.826241  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:05:39.826266  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:05:39.826283  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:05:39.826317  449185 start.go:360] acquireMachinesLock for ha-008703: {Name:mk6e7d74f274e3ed345384f8b747c056bd141bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:05:39.826376  449185 start.go:364] duration metric: took 38.285µs to acquireMachinesLock for "ha-008703"
	I1212 21:05:39.826401  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:05:39.826407  449185 fix.go:54] fixHost starting: 
	I1212 21:05:39.826688  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:39.844490  449185 fix.go:112] recreateIfNeeded on ha-008703: state=Stopped err=<nil>
	W1212 21:05:39.844521  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:05:39.847711  449185 out.go:252] * Restarting existing docker container for "ha-008703" ...
	I1212 21:05:39.847788  449185 cli_runner.go:164] Run: docker start ha-008703
	I1212 21:05:40.139310  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:40.163240  449185 kic.go:430] container "ha-008703" state is running.
	I1212 21:05:40.163662  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:40.191201  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:40.191459  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:05:40.191534  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:40.219354  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:40.219684  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:40.219693  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:05:40.220585  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:05:43.371942  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 21:05:43.371968  449185 ubuntu.go:182] provisioning hostname "ha-008703"
	I1212 21:05:43.372054  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.389586  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:43.389913  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:43.389930  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703 && echo "ha-008703" | sudo tee /etc/hostname
	I1212 21:05:43.553625  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 21:05:43.553711  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.571751  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:43.572079  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:43.572102  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:05:43.724831  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:05:43.724856  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:05:43.724884  449185 ubuntu.go:190] setting up certificates
	I1212 21:05:43.724903  449185 provision.go:84] configureAuth start
	I1212 21:05:43.724977  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:43.743377  449185 provision.go:143] copyHostCerts
	I1212 21:05:43.743421  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:43.743463  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:05:43.743471  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:43.743550  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:05:43.743646  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:43.743662  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:05:43.743667  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:43.743692  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:05:43.743751  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:43.743767  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:05:43.743771  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:43.743797  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:05:43.743859  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703 san=[127.0.0.1 192.168.49.2 ha-008703 localhost minikube]
	I1212 21:05:43.832472  449185 provision.go:177] copyRemoteCerts
	I1212 21:05:43.832541  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:05:43.832590  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.850299  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:43.956285  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:05:43.956420  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:05:43.974303  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:05:43.974381  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1212 21:05:43.992649  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:05:43.992714  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:05:44.013810  449185 provision.go:87] duration metric: took 288.892734ms to configureAuth
	I1212 21:05:44.013838  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:05:44.014088  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:44.014212  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.036649  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:44.037017  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:44.037041  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:05:44.386038  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:05:44.386060  449185 machine.go:97] duration metric: took 4.194590859s to provisionDockerMachine
	I1212 21:05:44.386072  449185 start.go:293] postStartSetup for "ha-008703" (driver="docker")
	I1212 21:05:44.386084  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:05:44.386193  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:05:44.386264  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.403386  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.508670  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:05:44.512195  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:05:44.512221  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:05:44.512236  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:05:44.512291  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:05:44.512398  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:05:44.512408  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:05:44.512511  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:05:44.520678  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:44.539590  449185 start.go:296] duration metric: took 153.501859ms for postStartSetup
	I1212 21:05:44.539670  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:05:44.539734  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.557736  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.661664  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:05:44.666383  449185 fix.go:56] duration metric: took 4.839968923s for fixHost
	I1212 21:05:44.666409  449185 start.go:83] releasing machines lock for "ha-008703", held for 4.840020362s
	I1212 21:05:44.666477  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:44.684762  449185 ssh_runner.go:195] Run: cat /version.json
	I1212 21:05:44.684817  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.685079  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:05:44.685134  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.708523  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.712753  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.904198  449185 ssh_runner.go:195] Run: systemctl --version
	I1212 21:05:44.910603  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:05:44.946561  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:05:44.951022  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:05:44.951140  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:05:44.959060  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:05:44.959085  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:05:44.959118  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:05:44.959164  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:05:44.974739  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:05:44.987642  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:05:44.987758  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:05:45.005197  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:05:45.023356  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:05:45.187771  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:05:45.360312  449185 docker.go:234] disabling docker service ...
	I1212 21:05:45.360416  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:05:45.382556  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:05:45.397072  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:05:45.515232  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:05:45.630674  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:05:45.644319  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:05:45.659761  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:05:45.659839  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.669217  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:05:45.669329  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.678932  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.691100  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.701211  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:05:45.710201  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.720671  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.729634  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.739187  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:05:45.747460  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:05:45.755441  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:45.880049  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:05:46.064833  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:05:46.064907  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:05:46.068969  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:05:46.069037  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:05:46.072837  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:05:46.098607  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:05:46.098708  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:46.128236  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:46.158573  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:05:46.161391  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:05:46.178132  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:05:46.181932  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:46.192021  449185 kubeadm.go:884] updating cluster {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:05:46.192177  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:46.192251  449185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:05:46.227916  449185 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:05:46.227942  449185 crio.go:433] Images already preloaded, skipping extraction
	I1212 21:05:46.227998  449185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:05:46.253605  449185 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:05:46.253629  449185 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:05:46.253638  449185 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 21:05:46.253742  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:05:46.253823  449185 ssh_runner.go:195] Run: crio config
	I1212 21:05:46.327816  449185 cni.go:84] Creating CNI manager for ""
	I1212 21:05:46.327839  449185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 21:05:46.327863  449185 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:05:46.327893  449185 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-008703 NodeName:ha-008703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:05:46.328051  449185 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-008703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:05:46.328077  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:05:46.328142  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:05:46.341034  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:46.341215  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1212 21:05:46.341284  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:05:46.349457  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:05:46.349531  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1212 21:05:46.357340  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1212 21:05:46.371153  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:05:46.384332  449185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1212 21:05:46.397565  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:05:46.411895  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:05:46.415692  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:46.426113  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:46.540637  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:46.557178  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.2
	I1212 21:05:46.557202  449185 certs.go:195] generating shared ca certs ...
	I1212 21:05:46.557219  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:46.557365  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:05:46.557420  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:05:46.557434  449185 certs.go:257] generating profile certs ...
	I1212 21:05:46.557525  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:05:46.557600  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904
	I1212 21:05:46.557649  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:05:46.557662  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:05:46.557674  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:05:46.557688  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:05:46.557703  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:05:46.557714  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:05:46.557731  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:05:46.557752  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:05:46.557770  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:05:46.557824  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:05:46.557861  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:05:46.557873  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:05:46.557901  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:05:46.557930  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:05:46.557955  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:05:46.558003  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:46.558037  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.558052  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.558066  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.558628  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:05:46.581904  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:05:46.602655  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:05:46.623772  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:05:46.644667  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:05:46.670849  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:05:46.690125  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:05:46.719167  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:05:46.743203  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:05:46.764296  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:05:46.788880  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:05:46.807678  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:05:46.822196  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:05:46.829401  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.838655  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:05:46.847305  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.851571  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.851686  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.894892  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:05:46.903217  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.911071  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:05:46.919222  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.923110  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.923186  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.964916  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:05:46.972957  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.980730  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:05:46.989130  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.993540  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.993610  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:05:47.036478  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
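The "openssl x509 -hash -noout" calls above compute the subject hash that names the /etc/ssl/certs/<hash>.0 symlinks probed right after them (3ec20f2e.0, b5213941.0, 51391683.0). A minimal sketch of the same mapping, assuming a generic certificate path (illustrative, not a command from this run):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"   # e.g. /etc/ssl/certs/b5213941.0
    ls -l "/etc/ssl/certs/${HASH}.0"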
	I1212 21:05:47.044309  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:05:47.048593  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:05:47.091048  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:05:47.132635  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:05:47.184472  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:05:47.233316  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:05:47.289483  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
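The "-checkend 86400" invocations above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, non-zero means it has expired or will expire within that window. A hedged one-liner showing the same check on one of the paths from this run:

    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h (or already expired)"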
	I1212 21:05:47.363953  449185 kubeadm.go:401] StartCluster: {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:47.364111  449185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:05:47.364177  449185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:05:47.424432  449185 cri.go:89] found id: "05ba874359221bdf846b1fb8dbe911f962d4cf06c723c81f7a60410d0ca7fa2b"
	I1212 21:05:47.424457  449185 cri.go:89] found id: "6e71e63256727335b637c10c11453815d5622c8d5eb3fb9654535f5b4b692c2f"
	I1212 21:05:47.424463  449185 cri.go:89] found id: "62a05b797d32258dc4368afc3978a5b3f463b4eafed6049189130af79138e299"
	I1212 21:05:47.424466  449185 cri.go:89] found id: "03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d"
	I1212 21:05:47.424469  449185 cri.go:89] found id: "e2542b7b3b0add4c1c8e1167b6f86cc40b8c70e55d0db7ae97014db17bfee8b2"
	I1212 21:05:47.424473  449185 cri.go:89] found id: ""
	I1212 21:05:47.424525  449185 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 21:05:47.441549  449185 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T21:05:47Z" level=error msg="open /run/runc: no such file or directory"
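The runc list failure above is informational: /run/runc does not exist on this node (CRI-O is the runtime here), so the paused-container check cannot read runc state and the restart flow simply continues; the kube-system containers were already enumerated through the CRI a few lines earlier. The equivalent CRI-level query, shown again for reference:

    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system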
	I1212 21:05:47.441640  449185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:05:47.453706  449185 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:05:47.453729  449185 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:05:47.453787  449185 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:05:47.466638  449185 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:47.467064  449185 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-008703" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:47.467171  449185 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "ha-008703" cluster setting kubeconfig missing "ha-008703" context setting]
	I1212 21:05:47.467570  449185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.468100  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:05:47.468627  449185 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 21:05:47.468649  449185 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 21:05:47.468655  449185 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 21:05:47.468661  449185 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 21:05:47.468665  449185 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 21:05:47.468983  449185 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:05:47.469097  449185 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 21:05:47.477581  449185 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 21:05:47.477605  449185 kubeadm.go:602] duration metric: took 23.869575ms to restartPrimaryControlPlane
	I1212 21:05:47.477614  449185 kubeadm.go:403] duration metric: took 113.6735ms to StartCluster
	I1212 21:05:47.477631  449185 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.477689  449185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:47.478278  449185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.478485  449185 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:05:47.478512  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:05:47.478526  449185 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:05:47.479081  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:47.484597  449185 out.go:179] * Enabled addons: 
	I1212 21:05:47.487542  449185 addons.go:530] duration metric: took 9.010305ms for enable addons: enabled=[]
	I1212 21:05:47.487605  449185 start.go:247] waiting for cluster config update ...
	I1212 21:05:47.487614  449185 start.go:256] writing updated cluster config ...
	I1212 21:05:47.491098  449185 out.go:203] 
	I1212 21:05:47.494772  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:47.494914  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:47.498660  449185 out.go:179] * Starting "ha-008703-m02" control-plane node in "ha-008703" cluster
	I1212 21:05:47.501545  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:05:47.504535  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:05:47.507691  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:47.507726  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:05:47.507835  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:05:47.507851  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:05:47.507972  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:47.508202  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:05:47.538497  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:05:47.538521  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:05:47.538535  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:05:47.538559  449185 start.go:360] acquireMachinesLock for ha-008703-m02: {Name:mk9bbd559a38ee71084b431688c18ccf671707a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:05:47.538627  449185 start.go:364] duration metric: took 48.131µs to acquireMachinesLock for "ha-008703-m02"
	I1212 21:05:47.538652  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:05:47.538660  449185 fix.go:54] fixHost starting: m02
	I1212 21:05:47.538948  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:47.574023  449185 fix.go:112] recreateIfNeeded on ha-008703-m02: state=Stopped err=<nil>
	W1212 21:05:47.574053  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:05:47.577557  449185 out.go:252] * Restarting existing docker container for "ha-008703-m02" ...
	I1212 21:05:47.577655  449185 cli_runner.go:164] Run: docker start ha-008703-m02
	I1212 21:05:47.980330  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:48.008294  449185 kic.go:430] container "ha-008703-m02" state is running.
	I1212 21:05:48.008939  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:48.047188  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:48.047422  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:05:48.047478  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:48.078749  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:48.079063  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:48.079074  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:05:48.079845  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44600->127.0.0.1:33207: read: connection reset by peer
	I1212 21:05:51.328699  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 21:05:51.328723  449185 ubuntu.go:182] provisioning hostname "ha-008703-m02"
	I1212 21:05:51.328784  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:51.373011  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:51.373328  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:51.373339  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m02 && echo "ha-008703-m02" | sudo tee /etc/hostname
	I1212 21:05:51.672250  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 21:05:51.672411  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:51.697392  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:51.697707  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:51.697724  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:05:51.885149  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:05:51.885219  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:05:51.885252  449185 ubuntu.go:190] setting up certificates
	I1212 21:05:51.885290  449185 provision.go:84] configureAuth start
	I1212 21:05:51.885368  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:51.907559  449185 provision.go:143] copyHostCerts
	I1212 21:05:51.907599  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:51.907631  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:05:51.907638  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:51.907718  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:05:51.907797  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:51.907814  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:05:51.907820  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:51.907846  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:05:51.907886  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:51.907901  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:05:51.907905  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:51.907929  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:05:51.907973  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m02 san=[127.0.0.1 192.168.49.3 ha-008703-m02 localhost minikube]
	I1212 21:05:52.137179  449185 provision.go:177] copyRemoteCerts
	I1212 21:05:52.137300  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:05:52.137386  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:52.156094  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:52.288849  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:05:52.288913  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:05:52.342195  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:05:52.342258  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:05:52.393562  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:05:52.393620  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:05:52.445696  449185 provision.go:87] duration metric: took 560.374153ms to configureAuth
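configureAuth above just generated a per-machine server certificate whose SANs (the san=[...] list: 127.0.0.1, 192.168.49.3, ha-008703-m02, localhost, minikube) must cover every name and address the machine is reached by. A hedged way to confirm those SANs ended up in the generated server.pem (path taken from the log above; the grep is only for readability):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'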
	I1212 21:05:52.445764  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:05:52.446027  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:52.446170  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:52.478675  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:52.478980  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:52.478993  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:05:53.000008  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:05:53.000110  449185 machine.go:97] duration metric: took 4.952677944s to provisionDockerMachine
	I1212 21:05:53.000138  449185 start.go:293] postStartSetup for "ha-008703-m02" (driver="docker")
	I1212 21:05:53.000177  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:05:53.000293  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:05:53.000358  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.020786  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.128335  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:05:53.131751  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:05:53.131783  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:05:53.131795  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:05:53.131855  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:05:53.131934  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:05:53.131947  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:05:53.132049  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:05:53.139844  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:53.158393  449185 start.go:296] duration metric: took 158.21332ms for postStartSetup
	I1212 21:05:53.158474  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:05:53.158534  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.176037  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.281959  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:05:53.287302  449185 fix.go:56] duration metric: took 5.74863443s for fixHost
	I1212 21:05:53.287331  449185 start.go:83] releasing machines lock for "ha-008703-m02", held for 5.748691916s
	I1212 21:05:53.287402  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:53.307739  449185 out.go:179] * Found network options:
	I1212 21:05:53.310522  449185 out.go:179]   - NO_PROXY=192.168.49.2
	W1212 21:05:53.313363  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:05:53.313414  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:05:53.313489  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:05:53.313533  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.313574  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:05:53.313632  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.336547  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.336799  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.542870  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:05:53.567799  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:05:53.567925  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:05:53.589478  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:05:53.589553  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:05:53.589598  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:05:53.589671  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:05:53.609030  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:05:53.638599  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:05:53.638724  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:05:53.668742  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:05:53.694088  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:05:53.934693  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:05:54.164277  449185 docker.go:234] disabling docker service ...
	I1212 21:05:54.164417  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:05:54.185997  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:05:54.207462  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:05:54.437335  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:05:54.661473  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:05:54.679927  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:05:54.707742  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:05:54.707861  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.723319  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:05:54.723443  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.740396  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.751373  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.768858  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:05:54.780854  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.795944  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.808854  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.818935  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:05:54.833159  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:05:54.849406  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:55.082636  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
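Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf containing at least the following lines once crio has restarted (reconstructed from the commands, not read back from the node):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]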
	I1212 21:05:55.362814  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:05:55.362938  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:05:55.366812  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:05:55.366918  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:05:55.370570  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:05:55.399084  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:05:55.399168  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:55.428944  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:55.460814  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:05:55.463826  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:05:55.466808  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:05:55.495103  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:05:55.503442  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:55.518854  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:05:55.519096  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:55.519362  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:55.545294  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:05:55.545592  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.3
	I1212 21:05:55.545608  449185 certs.go:195] generating shared ca certs ...
	I1212 21:05:55.545622  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:55.545735  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:05:55.545785  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:05:55.545796  449185 certs.go:257] generating profile certs ...
	I1212 21:05:55.545885  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:05:55.545952  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.b6a91b51
	I1212 21:05:55.546008  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:05:55.546022  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:05:55.546043  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:05:55.546059  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:05:55.546082  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:05:55.546098  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:05:55.546112  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:05:55.546126  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:05:55.546142  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:05:55.546197  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:05:55.546246  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:05:55.546262  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:05:55.546293  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:05:55.546320  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:05:55.546354  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:05:55.546415  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:55.546463  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:55.546490  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:05:55.546515  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:05:55.546583  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:55.568767  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:55.668715  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 21:05:55.672576  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 21:05:55.680945  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 21:05:55.684500  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 21:05:55.693000  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 21:05:55.696718  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 21:05:55.704917  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 21:05:55.708459  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 21:05:55.717032  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 21:05:55.720547  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 21:05:55.728907  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 21:05:55.732537  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 21:05:55.740854  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:05:55.760026  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:05:55.778517  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:05:55.797624  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:05:55.817142  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:05:55.835385  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:05:55.853338  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:05:55.872093  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:05:55.890019  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:05:55.908331  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:05:55.926030  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:05:55.944002  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 21:05:55.956838  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 21:05:55.969593  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 21:05:55.982132  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 21:05:55.995578  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 21:05:56.013190  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 21:05:56.026969  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
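At this point the shared control-plane material read from the primary (sa.pub/sa.key, front-proxy CA, etcd CA) has been copied onto m02; in an HA cluster these files must be byte-identical on every control-plane node. A hedged spot-check using minikube's node-aware ssh (profile and node names taken from this log):

    minikube -p ha-008703 ssh -- sudo sha256sum /var/lib/minikube/certs/etcd/ca.crt
    minikube -p ha-008703 ssh -n m02 -- sudo sha256sum /var/lib/minikube/certs/etcd/ca.crt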
	I1212 21:05:56.040988  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:05:56.047942  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.056004  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:05:56.064163  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.068273  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.068362  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.109836  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:05:56.118260  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.126352  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:05:56.134010  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.137848  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.137914  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.179470  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:05:56.187587  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.195301  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:05:56.203258  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.207359  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.207467  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.248706  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:05:56.256310  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:05:56.260190  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:05:56.306385  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:05:56.347361  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:05:56.389865  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:05:56.430835  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:05:56.472973  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:05:56.521282  449185 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1212 21:05:56.521453  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
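The kubelet unit drop-in above is what lands in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf via the scp a few lines below (363 bytes). A hedged way to confirm systemd picked it up after the daemon-reload, run on the node itself:

    systemctl cat kubelet | grep -B1 -A5 '10-kubeadm.conf'
    systemctl show kubelet -p ExecStart --no-pager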
	I1212 21:05:56.521498  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:05:56.521575  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:05:56.534831  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:56.534951  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1212 21:05:56.535047  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:05:56.543116  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:05:56.543223  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 21:05:56.551463  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:05:56.566227  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:05:56.579329  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:05:56.592969  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:05:56.596983  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
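The bash one-liner above drops any stale control-plane.minikube.internal mapping from /etc/hosts and appends the current VIP. A rough local Go equivalent, as a sketch only (minikube actually runs the shell command remotely over SSH):

// hostsentry.go - replace the control-plane.minikube.internal entry in /etc/hosts
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.254\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the control-plane name.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}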
	I1212 21:05:56.607297  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:56.744346  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:56.759793  449185 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:05:56.760120  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:56.766599  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:05:56.769234  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:56.908410  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:56.923082  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:05:56.923202  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:05:56.923464  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m02" to be "Ready" ...
	W1212 21:06:06.924664  449185 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:06:10.340284  449185 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:06:20.254665  449185 node_ready.go:49] node "ha-008703-m02" is "Ready"
	I1212 21:06:20.254694  449185 node_ready.go:38] duration metric: took 23.33118731s for node "ha-008703-m02" to be "Ready" ...
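node_ready.go is polling the API server for the node's Ready condition until it reports True. A rough sketch of that kind of wait loop with client-go (the kubeconfig path and node name here are just examples taken from this run; this is not minikube's actual code):

// nodeready.go - poll a node's Ready condition via client-go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-008703-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		// Transient errors (TLS handshake timeouts, Retry-After responses) are simply retried.
		time.Sleep(2 * time.Second)
	}
}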
	I1212 21:06:20.254707  449185 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:06:20.254768  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:20.278828  449185 api_server.go:72] duration metric: took 23.518673135s to wait for apiserver process to appear ...
	I1212 21:06:20.278854  449185 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:06:20.278876  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:20.361760  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:06:20.361785  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:06:20.779312  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:20.809650  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:20.809728  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:21.279043  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:21.326274  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:21.326348  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:21.779606  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:21.811129  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:21.811210  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:22.279504  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:22.299466  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:22.299549  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:22.779116  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:22.797946  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:22.798028  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:23.279662  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:23.308514  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:23.308642  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:23.779220  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:23.800333  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:23.800429  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:24.278995  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:24.291485  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 21:06:24.307186  449185 api_server.go:141] control plane version: v1.34.2
	I1212 21:06:24.307278  449185 api_server.go:131] duration metric: took 4.028399738s to wait for apiserver health ...
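api_server.go keeps hitting /healthz until the remaining post-start hooks (rbac/bootstrap-roles, the priority-class bootstrap) finish and the endpoint returns 200. A bare-bones Go polling loop in the same spirit (illustrative only; it skips TLS verification and ignores the anonymous-user 403 case for brevity, whereas minikube authenticates with the cluster client certificates):

// healthzwait.go - poll the apiserver /healthz endpoint until it returns 200
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification only to keep the sketch short.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	for {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
}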
	I1212 21:06:24.307306  449185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:06:24.326207  449185 system_pods.go:59] 26 kube-system pods found
	I1212 21:06:24.326317  449185 system_pods.go:61] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.326341  449185 system_pods.go:61] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.326383  449185 system_pods.go:61] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:24.326404  449185 system_pods.go:61] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:06:24.326425  449185 system_pods.go:61] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:24.326458  449185 system_pods.go:61] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:24.326482  449185 system_pods.go:61] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:24.326502  449185 system_pods.go:61] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:24.326524  449185 system_pods.go:61] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:24.326559  449185 system_pods.go:61] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.326604  449185 system_pods.go:61] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.326624  449185 system_pods.go:61] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:24.326647  449185 system_pods.go:61] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.326684  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.326711  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:24.326732  449185 system_pods.go:61] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:24.326752  449185 system_pods.go:61] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:24.326770  449185 system_pods.go:61] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:24.326797  449185 system_pods.go:61] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:24.326828  449185 system_pods.go:61] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:24.326851  449185 system_pods.go:61] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:06:24.326870  449185 system_pods.go:61] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:24.326900  449185 system_pods.go:61] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:24.326923  449185 system_pods.go:61] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:24.326944  449185 system_pods.go:61] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:24.326964  449185 system_pods.go:61] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running
	I1212 21:06:24.326987  449185 system_pods.go:74] duration metric: took 19.648646ms to wait for pod list to return data ...
	I1212 21:06:24.327025  449185 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:06:24.345476  449185 default_sa.go:45] found service account: "default"
	I1212 21:06:24.345542  449185 default_sa.go:55] duration metric: took 18.497613ms for default service account to be created ...
	I1212 21:06:24.345567  449185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:06:24.441449  449185 system_pods.go:86] 26 kube-system pods found
	I1212 21:06:24.441494  449185 system_pods.go:89] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.441509  449185 system_pods.go:89] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.441517  449185 system_pods.go:89] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:24.441529  449185 system_pods.go:89] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:06:24.441537  449185 system_pods.go:89] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:24.441542  449185 system_pods.go:89] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:24.441549  449185 system_pods.go:89] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:24.441553  449185 system_pods.go:89] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:24.441557  449185 system_pods.go:89] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:24.441564  449185 system_pods.go:89] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.441576  449185 system_pods.go:89] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.441580  449185 system_pods.go:89] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:24.441592  449185 system_pods.go:89] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.441601  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.441606  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:24.441612  449185 system_pods.go:89] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:24.441616  449185 system_pods.go:89] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:24.441620  449185 system_pods.go:89] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:24.441627  449185 system_pods.go:89] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:24.441631  449185 system_pods.go:89] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:24.441646  449185 system_pods.go:89] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:06:24.441650  449185 system_pods.go:89] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:24.441654  449185 system_pods.go:89] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:24.441665  449185 system_pods.go:89] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:24.441671  449185 system_pods.go:89] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:24.441675  449185 system_pods.go:89] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running
	I1212 21:06:24.441684  449185 system_pods.go:126] duration metric: took 96.098139ms to wait for k8s-apps to be running ...
	I1212 21:06:24.441697  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:06:24.441755  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:06:24.458749  449185 system_svc.go:56] duration metric: took 17.042535ms WaitForService to wait for kubelet
	I1212 21:06:24.458826  449185 kubeadm.go:587] duration metric: took 27.69867432s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:24.458863  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:06:24.463250  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463295  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463308  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463313  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463317  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463322  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463325  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463330  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463334  449185 node_conditions.go:105] duration metric: took 4.443929ms to run NodePressure ...
	I1212 21:06:24.463360  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:06:24.463389  449185 start.go:256] writing updated cluster config ...
	I1212 21:06:24.467450  449185 out.go:203] 
	I1212 21:06:24.471714  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:24.471840  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.475478  449185 out.go:179] * Starting "ha-008703-m03" control-plane node in "ha-008703" cluster
	I1212 21:06:24.479357  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:06:24.482576  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:06:24.485573  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:06:24.485605  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:06:24.485687  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:06:24.485718  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:06:24.485736  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:06:24.485861  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.512091  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:06:24.512112  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:06:24.512126  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:06:24.512153  449185 start.go:360] acquireMachinesLock for ha-008703-m03: {Name:mkc4792dc097e09b497b46fff7452c5b0b6f70aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:24.512210  449185 start.go:364] duration metric: took 41.255µs to acquireMachinesLock for "ha-008703-m03"
	I1212 21:06:24.512230  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:06:24.512237  449185 fix.go:54] fixHost starting: m03
	I1212 21:06:24.512562  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:06:24.547705  449185 fix.go:112] recreateIfNeeded on ha-008703-m03: state=Stopped err=<nil>
	W1212 21:06:24.547736  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:06:24.551016  449185 out.go:252] * Restarting existing docker container for "ha-008703-m03" ...
	I1212 21:06:24.551124  449185 cli_runner.go:164] Run: docker start ha-008703-m03
	I1212 21:06:24.918317  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:06:24.943282  449185 kic.go:430] container "ha-008703-m03" state is running.
	I1212 21:06:24.944655  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:24.976163  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.976462  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:06:24.976536  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:25.007740  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:25.008073  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:25.008082  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:06:25.008934  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45896->127.0.0.1:33212: read: connection reset by peer
	I1212 21:06:28.195900  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m03
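The first SSH dial against the freshly restarted container is rejected ("connection reset by peer") because sshd inside the container is still coming up; the provisioner simply retries until the hostname command succeeds. A sketch of such a retrying dial with golang.org/x/crypto/ssh (the key path and forwarded port follow the log above, but the retry policy is an example, not minikube's):

// sshretry.go - dial a node's forwarded SSH port with retries and run "hostname"
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
		Timeout:         10 * time.Second,
	}
	var client *ssh.Client
	for attempt := 1; attempt <= 10; attempt++ {
		client, err = ssh.Dial("tcp", "127.0.0.1:33212", cfg)
		if err == nil {
			break
		}
		fmt.Println("dial failed, retrying:", err)
		time.Sleep(2 * time.Second)
	}
	if client == nil {
		panic("could not reach sshd")
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, _ := session.CombinedOutput("hostname")
	fmt.Printf("remote hostname: %s", out)
}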
	
	I1212 21:06:28.195925  449185 ubuntu.go:182] provisioning hostname "ha-008703-m03"
	I1212 21:06:28.195992  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:28.238514  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:28.238834  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:28.238851  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m03 && echo "ha-008703-m03" | sudo tee /etc/hostname
	I1212 21:06:28.479384  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m03
	
	I1212 21:06:28.479480  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:28.507106  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:28.507416  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:28.507437  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:06:28.751314  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:06:28.751390  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:06:28.751429  449185 ubuntu.go:190] setting up certificates
	I1212 21:06:28.751469  449185 provision.go:84] configureAuth start
	I1212 21:06:28.751595  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:28.780423  449185 provision.go:143] copyHostCerts
	I1212 21:06:28.780473  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:28.780506  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:06:28.780519  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:28.780599  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:06:28.780687  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:28.780712  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:06:28.780720  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:28.780749  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:06:28.780795  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:28.780816  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:06:28.780823  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:28.780848  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:06:28.780902  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m03 san=[127.0.0.1 192.168.49.4 ha-008703-m03 localhost minikube]
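configureAuth regenerates the machine's server certificate with the SANs listed above (127.0.0.1, 192.168.49.4, ha-008703-m03, localhost, minikube). A compact Go sketch of issuing a certificate with those SANs via crypto/x509 (self-signed here purely for brevity; minikube signs it with its CA key instead):

// servercert.go - emit a server certificate carrying DNS and IP SANs
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-008703-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-008703-m03", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
	}
	// Self-signed: template doubles as the parent certificate.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}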
	I1212 21:06:29.132570  449185 provision.go:177] copyRemoteCerts
	I1212 21:06:29.132679  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:06:29.132752  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:29.161077  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:29.290001  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:06:29.290063  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:06:29.326015  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:06:29.326077  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:06:29.373017  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:06:29.373102  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:06:29.430671  449185 provision.go:87] duration metric: took 679.168963ms to configureAuth
	I1212 21:06:29.430700  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:06:29.430943  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:29.431050  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:29.464440  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:29.464756  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:29.464775  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:06:30.522791  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:06:30.522817  449185 machine.go:97] duration metric: took 5.546337341s to provisionDockerMachine
	I1212 21:06:30.522830  449185 start.go:293] postStartSetup for "ha-008703-m03" (driver="docker")
	I1212 21:06:30.522841  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:06:30.522923  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:06:30.522969  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.541196  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.648836  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:06:30.652559  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:06:30.652598  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:06:30.652624  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:06:30.652708  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:06:30.652823  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:06:30.652833  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:06:30.652939  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:06:30.661331  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:30.687281  449185 start.go:296] duration metric: took 164.433925ms for postStartSetup
	I1212 21:06:30.687373  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:06:30.687421  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.713364  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.821971  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:06:30.827033  449185 fix.go:56] duration metric: took 6.314788872s for fixHost
	I1212 21:06:30.827061  449185 start.go:83] releasing machines lock for "ha-008703-m03", held for 6.314842198s
	I1212 21:06:30.827140  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:30.847749  449185 out.go:179] * Found network options:
	I1212 21:06:30.850465  449185 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1212 21:06:30.853486  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853520  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853545  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853558  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:06:30.853630  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:06:30.853672  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.853950  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:06:30.854006  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.875211  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.901708  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:31.084053  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:06:31.089338  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:06:31.089442  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:06:31.098288  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:06:31.098362  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:06:31.098418  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:06:31.098504  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:06:31.115825  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:06:31.132457  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:06:31.132578  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:06:31.150352  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:06:31.166465  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:06:31.301826  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:06:31.519838  449185 docker.go:234] disabling docker service ...
	I1212 21:06:31.519963  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:06:31.552895  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:06:31.586883  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:06:31.921487  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:06:32.171189  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:06:32.196225  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:06:32.218996  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:06:32.219066  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.231170  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:06:32.231254  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.264701  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.278943  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.293177  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:06:32.313973  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.323884  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.333399  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.345640  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:06:32.354606  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:06:32.378038  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:32.601691  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:06:32.867254  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:06:32.867377  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
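After the CRI-O restart, the tooling waits up to 60s for /var/run/crio/crio.sock before probing crictl. A stdlib Go sketch of that kind of socket wait is shown below; the socket path and timeout come from the log, while the polling loop itself is an illustrative assumption rather than minikube's code.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

// waitForSocket polls until the unix socket at path accepts a connection
// or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			if c, err := net.DialTimeout("unix", path, time.Second); err == nil {
				c.Close()
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}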
	I1212 21:06:32.871734  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:06:32.871807  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:06:32.875400  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:06:32.900774  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:06:32.900910  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:06:32.930896  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:06:32.972077  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:06:32.974985  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:06:32.977916  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1212 21:06:32.980878  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:06:32.998829  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:06:33.008314  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:06:33.019604  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:06:33.019853  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:33.020130  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:06:33.050582  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:06:33.050909  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.4
	I1212 21:06:33.050924  449185 certs.go:195] generating shared ca certs ...
	I1212 21:06:33.050954  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:06:33.051090  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:06:33.051141  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:06:33.051152  449185 certs.go:257] generating profile certs ...
	I1212 21:06:33.051239  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:06:33.051314  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.77152b1c
	I1212 21:06:33.051365  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:06:33.051374  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:06:33.051387  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:06:33.051401  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:06:33.051418  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:06:33.051430  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:06:33.051446  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:06:33.051463  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:06:33.051479  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:06:33.051535  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:06:33.051571  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:06:33.051584  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:06:33.051615  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:06:33.051643  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:06:33.051671  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:06:33.051721  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:33.051757  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.051774  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.051785  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.051851  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:06:33.071355  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:06:33.180711  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 21:06:33.184847  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 21:06:33.194292  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 21:06:33.198466  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 21:06:33.207132  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 21:06:33.210762  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 21:06:33.219366  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 21:06:33.222902  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 21:06:33.231254  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 21:06:33.235252  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 21:06:33.245320  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 21:06:33.249647  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 21:06:33.259234  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:06:33.282501  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:06:33.308249  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:06:33.330512  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:06:33.350745  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:06:33.371841  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:06:33.392489  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:06:33.415260  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:06:33.435093  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:06:33.455125  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:06:33.475775  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:06:33.503119  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 21:06:33.519902  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 21:06:33.541097  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 21:06:33.558546  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 21:06:33.580936  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 21:06:33.604112  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 21:06:33.628438  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 21:06:33.645138  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:06:33.653214  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.661760  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:06:33.672498  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.677561  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.677637  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.725658  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:06:33.734300  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.742147  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:06:33.750364  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.754312  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.754435  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.795883  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:06:33.803561  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.811944  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:06:33.819768  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.823821  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.823917  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.869341  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:06:33.877525  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:06:33.881524  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:06:33.923421  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:06:33.965151  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:06:34.007958  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:06:34.056315  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:06:34.099324  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
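Each "openssl x509 -checkend 86400" run above verifies that a control-plane certificate remains valid for at least another 24 hours. An equivalent check with Go's crypto/x509 could look like the sketch below; the file path is one example from the log, and this is not how minikube itself performs the check.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within the given window -- the condition "-checkend 86400" tests.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h; regeneration needed")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}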
	I1212 21:06:34.142509  449185 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.2 crio true true} ...
	I1212 21:06:34.142710  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:06:34.142750  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:06:34.142821  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:06:34.155586  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:06:34.155655  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
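The manifest above configures kube-vip with leader election on the plndr-cp-lock Lease in kube-system, so only one control-plane node advertises the VIP 192.168.49.254 at a time. A hedged client-go sketch for checking which node currently holds that lease follows; the kubeconfig path is an assumed example and the snippet is a diagnostic aid, not part of minikube.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// kube-vip's leader-election lease, named by vip_leasename in the manifest above.
	lease, err := cs.CoordinationV1().Leases("kube-system").Get(context.Background(), "plndr-cp-lock", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Printf("VIP currently held by: %s\n", *lease.Spec.HolderIdentity)
	}
}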
	I1212 21:06:34.155735  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:06:34.164504  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:06:34.164593  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 21:06:34.172960  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:06:34.187238  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:06:34.202155  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:06:34.217531  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:06:34.221916  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:06:34.232222  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:34.409764  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:06:34.425465  449185 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:06:34.426019  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:34.429018  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:06:34.431984  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:34.608481  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:06:34.623603  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:06:34.623719  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:06:34.623971  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m03" to be "Ready" ...
	I1212 21:06:34.627483  449185 node_ready.go:49] node "ha-008703-m03" is "Ready"
	I1212 21:06:34.627510  449185 node_ready.go:38] duration metric: took 3.502711ms for node "ha-008703-m03" to be "Ready" ...
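The node_ready.go step above waits up to 6m for the joined control-plane node to report Ready. A minimal client-go sketch of the same condition check is shown below; the polling interval, timeout handling, and kubeconfig path are illustrative assumptions.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the node's Ready condition is True.
func isReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-008703-m03", metav1.GetOptions{})
		if err == nil && isReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for node to be Ready")
}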
	I1212 21:06:34.627524  449185 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:06:34.627583  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:35.127774  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:35.627665  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:36.128468  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:36.628211  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:37.128314  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:37.627991  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:38.127766  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:38.627868  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:39.128698  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:39.628035  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:40.128648  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:40.627740  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:41.128354  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:41.628245  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:42.130632  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:42.627827  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:43.128583  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:43.627968  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:44.128136  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:44.628605  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:45.128568  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:45.627727  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:46.128033  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:46.627763  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:47.128250  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:47.628035  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:48.127920  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:48.628389  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:49.127872  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:49.628485  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:50.127813  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:50.627737  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:51.128714  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:51.628186  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:52.128495  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:52.627734  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.128077  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.628172  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.643287  449185 api_server.go:72] duration metric: took 19.217761741s to wait for apiserver process to appear ...
	I1212 21:06:53.643310  449185 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:06:53.643330  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:53.653231  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 21:06:53.654408  449185 api_server.go:141] control plane version: v1.34.2
	I1212 21:06:53.654429  449185 api_server.go:131] duration metric: took 11.111371ms to wait for apiserver health ...
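The healthz wait above issues a GET against https://192.168.49.2:8443/healthz and expects HTTP 200 with body "ok". A stdlib Go sketch of such a probe follows, trusting the cluster CA from the profile; the paths are copied from the log and the single-shot probe (rather than minikube's retry loop) is an assumption made to keep it short.

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	// Trust the minikube cluster CA so the apiserver's serving certificate verifies.
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // expect "200 ok"
}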
	I1212 21:06:53.654438  449185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:06:53.664181  449185 system_pods.go:59] 26 kube-system pods found
	I1212 21:06:53.664268  449185 system_pods.go:61] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.664292  449185 system_pods.go:61] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.664326  449185 system_pods.go:61] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:53.664350  449185 system_pods.go:61] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running
	I1212 21:06:53.664399  449185 system_pods.go:61] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:53.664423  449185 system_pods.go:61] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:53.664447  449185 system_pods.go:61] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:53.664476  449185 system_pods.go:61] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:53.664511  449185 system_pods.go:61] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:53.664543  449185 system_pods.go:61] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running
	I1212 21:06:53.664562  449185 system_pods.go:61] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running
	I1212 21:06:53.664586  449185 system_pods.go:61] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:53.664617  449185 system_pods.go:61] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running
	I1212 21:06:53.664639  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running
	I1212 21:06:53.664655  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:53.664672  449185 system_pods.go:61] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:53.664692  449185 system_pods.go:61] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:53.664722  449185 system_pods.go:61] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:53.664747  449185 system_pods.go:61] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:53.664767  449185 system_pods.go:61] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:53.664786  449185 system_pods.go:61] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running
	I1212 21:06:53.664806  449185 system_pods.go:61] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:53.664833  449185 system_pods.go:61] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:53.664856  449185 system_pods.go:61] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:53.664876  449185 system_pods.go:61] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:53.664898  449185 system_pods.go:61] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:06:53.664934  449185 system_pods.go:74] duration metric: took 10.478512ms to wait for pod list to return data ...
	I1212 21:06:53.664963  449185 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:06:53.672021  449185 default_sa.go:45] found service account: "default"
	I1212 21:06:53.672087  449185 default_sa.go:55] duration metric: took 7.103458ms for default service account to be created ...
	I1212 21:06:53.672114  449185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:06:53.683734  449185 system_pods.go:86] 26 kube-system pods found
	I1212 21:06:53.683818  449185 system_pods.go:89] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.683843  449185 system_pods.go:89] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.683876  449185 system_pods.go:89] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:53.683898  449185 system_pods.go:89] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running
	I1212 21:06:53.683916  449185 system_pods.go:89] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:53.683935  449185 system_pods.go:89] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:53.683958  449185 system_pods.go:89] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:53.683985  449185 system_pods.go:89] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:53.684009  449185 system_pods.go:89] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:53.684028  449185 system_pods.go:89] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running
	I1212 21:06:53.684048  449185 system_pods.go:89] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running
	I1212 21:06:53.684069  449185 system_pods.go:89] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:53.684096  449185 system_pods.go:89] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running
	I1212 21:06:53.684121  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running
	I1212 21:06:53.684144  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:53.684165  449185 system_pods.go:89] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:53.684195  449185 system_pods.go:89] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:53.684216  449185 system_pods.go:89] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:53.684234  449185 system_pods.go:89] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:53.684254  449185 system_pods.go:89] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:53.684274  449185 system_pods.go:89] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running
	I1212 21:06:53.684305  449185 system_pods.go:89] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:53.684334  449185 system_pods.go:89] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:53.684356  449185 system_pods.go:89] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:53.684505  449185 system_pods.go:89] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:53.684532  449185 system_pods.go:89] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:06:53.684555  449185 system_pods.go:126] duration metric: took 12.421784ms to wait for k8s-apps to be running ...
	I1212 21:06:53.684581  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:06:53.684664  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:06:53.707726  449185 system_svc.go:56] duration metric: took 23.13631ms WaitForService to wait for kubelet
	I1212 21:06:53.707794  449185 kubeadm.go:587] duration metric: took 19.282272877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:53.707828  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:06:53.713066  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713138  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713167  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713189  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713224  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713251  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713272  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713294  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713315  449185 node_conditions.go:105] duration metric: took 5.4683ms to run NodePressure ...
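The NodePressure verification above reads each node's capacity (here 2 CPUs and 203034800Ki of ephemeral storage per node). A short client-go sketch that lists the same figures is included below; the kubeconfig path is an assumed example and this is only a way to reproduce the numbers, not minikube's own check.

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}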
	I1212 21:06:53.713355  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:06:53.713389  449185 start.go:256] writing updated cluster config ...
	I1212 21:06:53.716967  449185 out.go:203] 
	I1212 21:06:53.720156  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:53.720328  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:53.723670  449185 out.go:179] * Starting "ha-008703-m04" worker node in "ha-008703" cluster
	I1212 21:06:53.726637  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:06:53.729576  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:06:53.732517  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:06:53.732614  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:06:53.732589  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:06:53.732947  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:06:53.732979  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:06:53.733130  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:53.769116  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:06:53.769147  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:06:53.769168  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:06:53.769196  449185 start.go:360] acquireMachinesLock for ha-008703-m04: {Name:mk62cc2a2cc2e6d3b3f47556aaddea9ef719055b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:53.769254  449185 start.go:364] duration metric: took 38.549µs to acquireMachinesLock for "ha-008703-m04"
	I1212 21:06:53.769277  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:06:53.769289  449185 fix.go:54] fixHost starting: m04
	I1212 21:06:53.769545  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 21:06:53.786769  449185 fix.go:112] recreateIfNeeded on ha-008703-m04: state=Stopped err=<nil>
	W1212 21:06:53.786801  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:06:53.789926  449185 out.go:252] * Restarting existing docker container for "ha-008703-m04" ...
	I1212 21:06:53.790089  449185 cli_runner.go:164] Run: docker start ha-008703-m04
	I1212 21:06:54.156965  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 21:06:54.178693  449185 kic.go:430] container "ha-008703-m04" state is running.
	I1212 21:06:54.179092  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:54.203905  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:54.204146  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:06:54.204209  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:54.236695  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:54.237065  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:54.237081  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:06:54.237686  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:06:57.432360  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m04
	
	I1212 21:06:57.432405  449185 ubuntu.go:182] provisioning hostname "ha-008703-m04"
	I1212 21:06:57.432471  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:57.466545  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:57.466905  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:57.466917  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m04 && echo "ha-008703-m04" | sudo tee /etc/hostname
	I1212 21:06:57.695949  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m04
	
	I1212 21:06:57.696057  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:57.725675  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:57.725993  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:57.726015  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:06:57.922048  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:06:57.922076  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:06:57.922097  449185 ubuntu.go:190] setting up certificates
	I1212 21:06:57.922108  449185 provision.go:84] configureAuth start
	I1212 21:06:57.922191  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:57.949300  449185 provision.go:143] copyHostCerts
	I1212 21:06:57.949346  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:57.949379  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:06:57.949390  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:57.949467  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:06:57.949557  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:57.949579  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:06:57.949590  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:57.949619  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:06:57.949669  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:57.949692  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:06:57.949702  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:57.949735  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:06:57.949797  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m04 san=[127.0.0.1 192.168.49.5 ha-008703-m04 localhost minikube]
	I1212 21:06:58.253055  449185 provision.go:177] copyRemoteCerts
	I1212 21:06:58.253130  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:06:58.253185  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:58.272770  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:58.384265  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:06:58.384326  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:06:58.432775  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:06:58.432846  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:06:58.468705  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:06:58.468769  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:06:58.498893  449185 provision.go:87] duration metric: took 576.767506ms to configureAuth
	I1212 21:06:58.498961  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:06:58.499231  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:58.499373  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:58.531077  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:58.531395  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:58.531411  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:06:59.036280  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:06:59.036310  449185 machine.go:97] duration metric: took 4.83214688s to provisionDockerMachine
	I1212 21:06:59.036331  449185 start.go:293] postStartSetup for "ha-008703-m04" (driver="docker")
	I1212 21:06:59.036343  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:06:59.036466  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:06:59.036523  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.086256  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.217706  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:06:59.225272  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:06:59.225304  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:06:59.225326  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:06:59.225398  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:06:59.225489  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:06:59.225502  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:06:59.225626  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:06:59.239694  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:59.289259  449185 start.go:296] duration metric: took 252.894748ms for postStartSetup
	I1212 21:06:59.289353  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:06:59.289435  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.318501  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.433235  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:06:59.440975  449185 fix.go:56] duration metric: took 5.671680345s for fixHost
	I1212 21:06:59.441000  449185 start.go:83] releasing machines lock for "ha-008703-m04", held for 5.671734343s
	I1212 21:06:59.441074  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:59.473221  449185 out.go:179] * Found network options:
	I1212 21:06:59.477821  449185 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1212 21:06:59.480861  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480899  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480912  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480936  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480956  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480968  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:06:59.481044  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:06:59.481089  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.481371  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:06:59.481425  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.521656  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.528821  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.865561  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:06:59.874595  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:06:59.874667  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:06:59.887303  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:06:59.887378  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:06:59.887427  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:06:59.887500  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:06:59.908986  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:06:59.940196  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:06:59.940301  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:06:59.959663  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:06:59.976282  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:07:00.307427  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:07:00.569417  449185 docker.go:234] disabling docker service ...
	I1212 21:07:00.569500  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:07:00.607031  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:07:00.633272  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:07:00.844907  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:07:01.084528  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:07:01.108001  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:07:01.130446  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:07:01.130569  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.145280  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:07:01.145425  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.165912  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.178770  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.192394  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:07:01.203182  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.214233  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.224343  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.236075  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:07:01.246300  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:07:01.256331  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:01.516203  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
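	[editor's sketch] The run of sed commands logged between 21:07:01.130 and 21:07:01.224 rewrites /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup and the unprivileged-port sysctl, before daemon-reload and the crio restart above. A rough Go equivalent of those text edits, operating on a local copy of the file; the path stand-in and the assumption that a default_sysctls block already exists are mine, not minikube's code.

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// pin the pause image and the cgroup manager, as in the first two sed calls
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, `cgroup_manager = "cgroupfs"`)
	// drop any stale conmon_cgroup line, then re-add it right after cgroup_manager
	s = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(s, "")
	s = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(s, "$0\nconmon_cgroup = \"pod\"")
	// assuming a default_sysctls block is present, prepend the unprivileged-port entry
	s = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(s, "$0\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
}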
	I1212 21:07:01.766997  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:07:01.767119  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:07:01.776270  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:07:01.776437  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:07:01.784745  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:07:01.824822  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:07:01.824977  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:07:01.889046  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:07:01.956065  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:07:01.959062  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:07:01.962079  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1212 21:07:01.964978  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1212 21:07:01.967779  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:07:01.996732  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:07:02.001678  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
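	[editor's sketch] The grep/echo pipeline logged just above is minikube's idempotent way of pinning host.minikube.internal to the gateway: strip any previous entry, append a fresh one, then copy the result back over /etc/hosts (the same trick reappears later for control-plane.minikube.internal). A small Go sketch of that remove-then-re-add update; the IP and host name come from the log, the file path here is a scratch copy rather than the real /etc/hosts.

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any line already ending in "\t<name>" and appends a fresh
// "<ip>\t<name>" line, mirroring the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("hosts", "192.168.49.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}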
	I1212 21:07:02.020405  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:07:02.020654  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:07:02.020930  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:07:02.039611  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:07:02.039893  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.5
	I1212 21:07:02.039901  449185 certs.go:195] generating shared ca certs ...
	I1212 21:07:02.039915  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:07:02.040028  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:07:02.040067  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:07:02.040078  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:07:02.040092  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:07:02.040104  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:07:02.040116  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:07:02.040169  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:07:02.040202  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:07:02.040210  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:07:02.040237  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:07:02.040261  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:07:02.040288  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:07:02.040334  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:07:02.040380  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.040396  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.040407  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.040424  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:07:02.066397  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:07:02.105376  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:07:02.137944  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:07:02.170023  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:07:02.210932  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:07:02.238540  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:07:02.269874  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:07:02.281063  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.291218  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:07:02.301041  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.308712  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.308786  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.368311  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:07:02.378631  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.387217  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:07:02.398975  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.403766  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.403869  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.470421  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:07:02.480522  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.493373  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:07:02.510638  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.516014  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.516150  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.591218  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:07:02.600904  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:07:02.619811  449185 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 21:07:02.619887  449185 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2 crio false true} ...
	I1212 21:07:02.619990  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:07:02.620088  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:07:02.636422  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:07:02.636540  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 21:07:02.650400  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:07:02.684861  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
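	[editor's sketch] The kubelet drop-in shown above (the [Unit]/[Service]/[Install] block around 21:07:02.619) is rendered in memory and copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node, with node-specific values such as --hostname-override and --node-ip filled in. A sketch of rendering such a unit with text/template; the flag set is abridged from the logged ExecStart and the template helper itself is illustrative, not minikube's code.

package main

import (
	"os"
	"text/template"
)

// kubeletUnit mirrors the drop-in in the log: clear ExecStart, then restart the
// kubelet with the node-specific hostname override and node IP.
const kubeletUnit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	// Values taken from the log for node m04.
	err := tmpl.Execute(os.Stdout, struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.2", "ha-008703-m04", "192.168.49.5"})
	if err != nil {
		panic(err)
	}
}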
	I1212 21:07:02.708803  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:07:02.713707  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:07:02.731184  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:03.010394  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:07:03.061651  449185 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 21:07:03.062018  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:07:03.067183  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:07:03.070801  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:03.406466  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:07:03.471431  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:07:03.471508  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:07:03.471736  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m04" to be "Ready" ...
	I1212 21:07:03.505163  449185 node_ready.go:49] node "ha-008703-m04" is "Ready"
	I1212 21:07:03.505194  449185 node_ready.go:38] duration metric: took 33.438197ms for node "ha-008703-m04" to be "Ready" ...
	I1212 21:07:03.505209  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:07:03.505266  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:07:03.526122  449185 system_svc.go:56] duration metric: took 20.904535ms WaitForService to wait for kubelet
	I1212 21:07:03.526155  449185 kubeadm.go:587] duration metric: took 464.111537ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:07:03.526175  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:07:03.582671  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582703  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582714  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582719  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582723  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582727  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582731  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582735  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582741  449185 node_conditions.go:105] duration metric: took 56.560779ms to run NodePressure ...
	I1212 21:07:03.582752  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:07:03.582774  449185 start.go:256] writing updated cluster config ...
	I1212 21:07:03.583086  449185 ssh_runner.go:195] Run: rm -f paused
	I1212 21:07:03.601326  449185 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:07:03.602059  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:07:03.627964  449185 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8tvqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.640449  449185 pod_ready.go:94] pod "coredns-66bc5c9577-8tvqx" is "Ready"
	I1212 21:07:03.640525  449185 pod_ready.go:86] duration metric: took 12.481008ms for pod "coredns-66bc5c9577-8tvqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.640551  449185 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kls2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.647941  449185 pod_ready.go:94] pod "coredns-66bc5c9577-kls2t" is "Ready"
	I1212 21:07:03.648021  449185 pod_ready.go:86] duration metric: took 7.447403ms for pod "coredns-66bc5c9577-kls2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.734522  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.742549  449185 pod_ready.go:94] pod "etcd-ha-008703" is "Ready"
	I1212 21:07:03.742645  449185 pod_ready.go:86] duration metric: took 8.036611ms for pod "etcd-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.742670  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.751107  449185 pod_ready.go:94] pod "etcd-ha-008703-m02" is "Ready"
	I1212 21:07:03.751180  449185 pod_ready.go:86] duration metric: took 8.490203ms for pod "etcd-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.751203  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.802884  449185 request.go:683] "Waited before sending request" delay="51.579039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-008703-m03"
	I1212 21:07:04.003143  449185 request.go:683] "Waited before sending request" delay="191.298042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:04.008011  449185 pod_ready.go:94] pod "etcd-ha-008703-m03" is "Ready"
	I1212 21:07:04.008105  449185 pod_ready.go:86] duration metric: took 256.8794ms for pod "etcd-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.203542  449185 request.go:683] "Waited before sending request" delay="195.301148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1212 21:07:04.208571  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.402858  449185 request.go:683] "Waited before sending request" delay="194.13984ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703"
	I1212 21:07:04.603054  449185 request.go:683] "Waited before sending request" delay="196.30777ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:04.607366  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703" is "Ready"
	I1212 21:07:04.607392  449185 pod_ready.go:86] duration metric: took 398.743662ms for pod "kube-apiserver-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.607403  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.802681  449185 request.go:683] "Waited before sending request" delay="195.203703ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703-m02"
	I1212 21:07:05.004599  449185 request.go:683] "Waited before sending request" delay="198.050663ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:05.009883  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703-m02" is "Ready"
	I1212 21:07:05.009916  449185 pod_ready.go:86] duration metric: took 402.505715ms for pod "kube-apiserver-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.009927  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.203348  449185 request.go:683] "Waited before sending request" delay="193.318894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703-m03"
	I1212 21:07:05.402598  449185 request.go:683] "Waited before sending request" delay="195.266325ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:05.407026  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703-m03" is "Ready"
	I1212 21:07:05.407054  449185 pod_ready.go:86] duration metric: took 397.119016ms for pod "kube-apiserver-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.603514  449185 request.go:683] "Waited before sending request" delay="196.332041ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1212 21:07:05.609335  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.802598  449185 request.go:683] "Waited before sending request" delay="193.136821ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703"
	I1212 21:07:06.002969  449185 request.go:683] "Waited before sending request" delay="196.400711ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:06.009868  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703" is "Ready"
	I1212 21:07:06.009898  449185 pod_ready.go:86] duration metric: took 400.534916ms for pod "kube-controller-manager-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.009910  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.203284  449185 request.go:683] "Waited before sending request" delay="193.288724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703-m02"
	I1212 21:07:06.403087  449185 request.go:683] "Waited before sending request" delay="195.335069ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:06.406992  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703-m02" is "Ready"
	I1212 21:07:06.407024  449185 pod_ready.go:86] duration metric: took 397.103754ms for pod "kube-controller-manager-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.407035  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.603444  449185 request.go:683] "Waited before sending request" delay="196.318585ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703-m03"
	I1212 21:07:06.803243  449185 request.go:683] "Waited before sending request" delay="196.311315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:06.811152  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703-m03" is "Ready"
	I1212 21:07:06.811182  449185 pod_ready.go:86] duration metric: took 404.13997ms for pod "kube-controller-manager-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.003659  449185 request.go:683] "Waited before sending request" delay="192.369133ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1212 21:07:07.008682  449185 pod_ready.go:83] waiting for pod "kube-proxy-26llr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.203112  449185 request.go:683] "Waited before sending request" delay="194.317566ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26llr"
	I1212 21:07:07.403112  449185 request.go:683] "Waited before sending request" delay="196.188213ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m04"
	I1212 21:07:07.406710  449185 pod_ready.go:94] pod "kube-proxy-26llr" is "Ready"
	I1212 21:07:07.406741  449185 pod_ready.go:86] duration metric: took 398.024461ms for pod "kube-proxy-26llr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.406752  449185 pod_ready.go:83] waiting for pod "kube-proxy-5cjcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.603217  449185 request.go:683] "Waited before sending request" delay="196.391784ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5cjcj"
	I1212 21:07:07.802591  449185 request.go:683] "Waited before sending request" delay="195.268704ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:07.806437  449185 pod_ready.go:94] pod "kube-proxy-5cjcj" is "Ready"
	I1212 21:07:07.806468  449185 pod_ready.go:86] duration metric: took 399.70889ms for pod "kube-proxy-5cjcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.806478  449185 pod_ready.go:83] waiting for pod "kube-proxy-tgx5j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.003374  449185 request.go:683] "Waited before sending request" delay="196.807041ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgx5j"
	I1212 21:07:08.203254  449185 request.go:683] "Waited before sending request" delay="193.281921ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:08.206488  449185 pod_ready.go:94] pod "kube-proxy-tgx5j" is "Ready"
	I1212 21:07:08.206516  449185 pod_ready.go:86] duration metric: took 400.031584ms for pod "kube-proxy-tgx5j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.206527  449185 pod_ready.go:83] waiting for pod "kube-proxy-v8lm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.402890  449185 request.go:683] "Waited before sending request" delay="196.283952ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v8lm4"
	I1212 21:07:08.602890  449185 request.go:683] "Waited before sending request" delay="190.306444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:08.606678  449185 pod_ready.go:94] pod "kube-proxy-v8lm4" is "Ready"
	I1212 21:07:08.606704  449185 pod_ready.go:86] duration metric: took 400.170499ms for pod "kube-proxy-v8lm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.803166  449185 request.go:683] "Waited before sending request" delay="196.329375ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1212 21:07:08.807939  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.006982  449185 request.go:683] "Waited before sending request" delay="198.916082ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703"
	I1212 21:07:09.203284  449185 request.go:683] "Waited before sending request" delay="192.346692ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:09.206489  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703" is "Ready"
	I1212 21:07:09.206522  449185 pod_ready.go:86] duration metric: took 398.549635ms for pod "kube-scheduler-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.206532  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.402973  449185 request.go:683] "Waited before sending request" delay="196.306934ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703-m02"
	I1212 21:07:09.603345  449185 request.go:683] "Waited before sending request" delay="192.346225ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:09.611536  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703-m02" is "Ready"
	I1212 21:07:09.611565  449185 pod_ready.go:86] duration metric: took 405.026929ms for pod "kube-scheduler-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.611575  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.802963  449185 request.go:683] "Waited before sending request" delay="191.311533ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703-m03"
	I1212 21:07:10.004827  449185 request.go:683] "Waited before sending request" delay="198.485333ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:10.012647  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703-m03" is "Ready"
	I1212 21:07:10.012677  449185 pod_ready.go:86] duration metric: took 401.094897ms for pod "kube-scheduler-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:10.012691  449185 pod_ready.go:40] duration metric: took 6.411220695s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:07:10.085120  449185 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 21:07:10.090453  449185 out.go:179] * Done! kubectl is now configured to use "ha-008703" cluster and "default" namespace by default
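	[editor's sketch] The tail of this run (21:07:03 to 21:07:10) is the pod_ready poller checking each kube-system control-plane pod for the Ready condition, with client-go's client-side throttling inserting the roughly 200 ms "Waited before sending request" delays logged before each GET. The following is a compact client-go sketch of that style of readiness wait; the kubeconfig path, namespace, label selector and polling intervals are illustrative, and minikube's own logic lives in pod_ready.go rather than in this helper.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Wait up to 4 minutes, polling every 2s, for every kube-proxy pod to report Ready,
	// roughly what the "extra waiting" phase in the log does per component label.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx,
				metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
			if err != nil {
				return false, nil // transient API errors: keep polling
			}
			for i := range pods.Items {
				if !podIsReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return len(pods.Items) > 0, nil
		})
	if err != nil {
		panic(err)
	}
	fmt.Println("all kube-proxy pods Ready")
}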
	
	
	==> CRI-O <==
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.084643835Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4f025e76-4eca-4fb1-b55a-f8d9a43fa536 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.087572223Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8ebdfa7e-5f7d-4824-b4b7-0fe2edd10aff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.087672564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.095689671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.0959013Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eb92904b79612128723b08cf808f293d7aa852c53deebc7388a003f7a25a6f9f/merged/etc/passwd: no such file or directory"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.095933095Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eb92904b79612128723b08cf808f293d7aa852c53deebc7388a003f7a25a6f9f/merged/etc/group: no such file or directory"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.096211382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.136290189Z" level=info msg="Created container 5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145: kube-system/storage-provisioner/storage-provisioner" id=8ebdfa7e-5f7d-4824-b4b7-0fe2edd10aff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.137414204Z" level=info msg="Starting container: 5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145" id=c9a226e6-422b-41f8-9e9f-add9192400a7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.14248122Z" level=info msg="Started container" PID=1398 containerID=5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145 description=kube-system/storage-provisioner/storage-provisioner id=c9a226e6-422b-41f8-9e9f-add9192400a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.077353049Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.084667544Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.090321422Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.090434276Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.101511448Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.108846054Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.108901554Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.125800597Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.125957924Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.126043537Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133398738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133546145Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133624332Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.148814452Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.148949928Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5129752cc0a67       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   24 seconds ago       Running             storage-provisioner       2                   1b6b1faf503c8       storage-provisioner                 kube-system
	3f4c5923951e8       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   56 seconds ago       Running             busybox                   1                   9a656c52a260b       busybox-7b57f96db7-tczdt            default
	560dd3383ed66       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   56 seconds ago       Running             coredns                   1                   2f24e16e55927       coredns-66bc5c9577-8tvqx            kube-system
	7cef3eaf30308       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   56 seconds ago       Running             kindnet-cni               1                   021217a0cf931       kindnet-f7h24                       kube-system
	82dd101ece4d1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   56 seconds ago       Exited              storage-provisioner       1                   1b6b1faf503c8       storage-provisioner                 kube-system
	ad94d81034c43       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   56 seconds ago       Running             coredns                   1                   b75479f05351c       coredns-66bc5c9577-kls2t            kube-system
	2b11faa987b07       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   56 seconds ago       Running             kube-proxy                1                   66c81b9e2ff38       kube-proxy-tgx5j                    kube-system
	f08cf114510a2       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   56 seconds ago       Running             kube-controller-manager   8                   19bf9c82b9d81       kube-controller-manager-ha-008703   kube-system
	93fc3054083af       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Running             kube-apiserver            8                   8176618f6ba71       kube-apiserver-ha-008703            kube-system
	05ba874359221       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Running             kube-scheduler            2                   60ffed268d568       kube-scheduler-ha-008703            kube-system
	6e71e63256727       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            7                   8176618f6ba71       kube-apiserver-ha-008703            kube-system
	62a05b797d322       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   About a minute ago   Running             kube-vip                  1                   8e01afee41b4c       kube-vip-ha-008703                  kube-system
	03159ef735d03       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   7                   19bf9c82b9d81       kube-controller-manager-ha-008703   kube-system
	e2542b7b3b0ad       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Running             etcd                      3                   e36007e1324cc       etcd-ha-008703                      kube-system
	
	
	==> coredns [560dd3383ed66f823e585260ec4823152488386a1e71bacea6cd9ca156adb2d8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52286 - 29430 "HINFO IN 4498128949033305171.1950480245235256825. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020264931s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ad94d81034c434b44c842f2117ddb8a51227d702a250a41dac1fac6dcf4f0e1c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36509 - 26980 "HINFO IN 2040533104487656964.3099826236879850204. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003954694s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-008703
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_52_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:52:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:07:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:06:20 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:06:20 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:06:20 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:06:20 +0000   Fri, 12 Dec 2025 20:52:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-008703
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                6ff1a8bd-14d1-41ae-8cb8-9156f60dd654
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tczdt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-66bc5c9577-8tvqx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 coredns-66bc5c9577-kls2t             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     15m
	  kube-system                 etcd-ha-008703                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-f7h24                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-008703             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-008703    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-tgx5j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-008703             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-008703                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   Starting                 15m                kube-proxy       
	  Normal   NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m (x8 over 15m)  kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Warning  CgroupV1                 15m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 15m                kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    15m                kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  15m                kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     15m                kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           15m                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   NodeReady                14m                kubelet          Node ha-008703 status is now: NodeReady
	  Normal   RegisteredNode           13m                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   Starting                 92s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 92s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  92s (x8 over 92s)  kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    92s (x8 over 92s)  kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     92s (x8 over 92s)  kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           52s                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           16s                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	
	
	Name:               ha-008703-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_52_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:52:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:07:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:53:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-008703-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                ca808c21-ecc5-4ee7-9940-dffdef1da5b2
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hltw8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-008703-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-blbfb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-008703-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-008703-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-5cjcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-008703-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-008703-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 35s                kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   Starting                 14m                kube-proxy       
	  Normal   RegisteredNode           14m                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           14m                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 10m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-008703-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           10m                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 88s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node ha-008703-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           53s                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           52s                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           16s                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	
	
	Name:               ha-008703-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_54_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:54:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:07:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:07:09 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:07:09 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:07:09 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:07:09 +0000   Fri, 12 Dec 2025 20:54:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-008703-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                fa4c05be-b5d2-4bf0-a4b6-630b820e0e0a
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kc6ms                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-008703-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         13m
	  kube-system                 kindnet-6dvv4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-ha-008703-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-008703-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-v8lm4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-008703-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-008703-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 13m                kube-proxy       
	  Normal   Starting                 3s                 kube-proxy       
	  Normal   CIDRAssignmentFailed     13m                cidrAllocator    Node ha-008703-m03 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           13m                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           13m                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           53s                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           52s                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   Starting                 51s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 51s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node ha-008703-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node ha-008703-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     51s (x8 over 51s)  kubelet          Node ha-008703-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16s                node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	
	
	Name:               ha-008703-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_55_24_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:55:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:07:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:07:08 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:07:08 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:07:08 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:07:08 +0000   Fri, 12 Dec 2025 20:56:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-008703-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                8a9366c1-4fff-44a3-a6b8-824607a69efc
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fwsws       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-proxy-26llr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   NodeHasSufficientPID     11m (x3 over 11m)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientPID
	  Normal   CIDRAssignmentFailed     11m                cidrAllocator    Node ha-008703-m04 status is now: CIDRAssignmentFailed
	  Normal   Starting                 11m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 11m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x3 over 11m)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x3 over 11m)  kubelet          Node ha-008703-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           11m                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   NodeReady                11m                kubelet          Node ha-008703-m04 status is now: NodeReady
	  Normal   RegisteredNode           10m                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           53s                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           52s                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   Starting                 23s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 23s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20s (x8 over 23s)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20s (x8 over 23s)  kubelet          Node ha-008703-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20s (x8 over 23s)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16s                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
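
The four node blocks above are the describe-nodes output for the HA profile: three control-plane nodes (ha-008703, -m02, -m03) and one worker (-m04), all reporting Ready. A shorter cross-check of the same state, as a sketch assuming kubectl access to the cluster:

  # node status, roles, internal IPs
  kubectl get nodes -o wide
  # pod CIDR assigned to each node (matches the PodCIDR fields above)
  kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'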
	
	
	==> dmesg <==
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	[Dec12 20:52] overlayfs: idmapped layers are currently not supported
	[ +33.094252] overlayfs: idmapped layers are currently not supported
	[Dec12 20:53] overlayfs: idmapped layers are currently not supported
	[Dec12 20:55] overlayfs: idmapped layers are currently not supported
	[Dec12 20:56] overlayfs: idmapped layers are currently not supported
	[Dec12 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.790478] overlayfs: idmapped layers are currently not supported
	[Dec12 21:05] overlayfs: idmapped layers are currently not supported
	[  +3.613273] overlayfs: idmapped layers are currently not supported
	[Dec12 21:06] overlayfs: idmapped layers are currently not supported
	[Dec12 21:07] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e2542b7b3b0add4c1c8e1167b6f86cc40b8c70e55d0db7ae97014db17bfee8b2] <==
	{"level":"warn","ts":"2025-12-12T21:06:33.065790Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:34.385240Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"61bc3757651ee949"}
	{"level":"warn","ts":"2025-12-12T21:06:37.066825Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:37.066883Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:37.766673Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:37.766690Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:41.068742Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:41.068796Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:42.766800Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:42.766892Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:45.070740Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:45.070818Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:47.767522Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:47.767544Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:49.072862Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.4:2380/version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:49.072916Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"61bc3757651ee949","error":"Get \"https://192.168.49.4:2380/version\": dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"info","ts":"2025-12-12T21:06:52.518541Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"61bc3757651ee949","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-12T21:06:52.518591Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"61bc3757651ee949"}
	{"level":"info","ts":"2025-12-12T21:06:52.518603Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"61bc3757651ee949"}
	{"level":"info","ts":"2025-12-12T21:06:52.527855Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"61bc3757651ee949","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-12T21:06:52.527959Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"61bc3757651ee949"}
	{"level":"info","ts":"2025-12-12T21:06:52.573914Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"61bc3757651ee949"}
	{"level":"info","ts":"2025-12-12T21:06:52.574238Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"61bc3757651ee949"}
	{"level":"warn","ts":"2025-12-12T21:06:52.767676Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-12-12T21:06:52.767687Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"61bc3757651ee949","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: connection refused"}
	
	
	==> kernel <==
	 21:07:18 up  3:49,  0 user,  load average: 4.74, 1.94, 1.23
	Linux ha-008703 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7cef3eaf30308ab6e267a8568bc724dbe47546cc79d171e489dd52fca0f76a09] <==
	E1212 21:06:52.117526       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1212 21:06:52.117654       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1212 21:06:52.126134       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1212 21:06:53.716520       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 21:06:53.716636       1 metrics.go:72] Registering metrics
	I1212 21:06:53.716756       1 controller.go:711] "Syncing nftables rules"
	I1212 21:07:02.075035       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1212 21:07:02.075188       1 main.go:324] Node ha-008703-m02 has CIDR [10.244.1.0/24] 
	I1212 21:07:02.075398       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.49.3 Flags: [] Table: 0 Realm: 0} 
	I1212 21:07:02.075556       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1212 21:07:02.075607       1 main.go:324] Node ha-008703-m03 has CIDR [10.244.2.0/24] 
	I1212 21:07:02.075742       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.4 Flags: [] Table: 0 Realm: 0} 
	I1212 21:07:02.075878       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1212 21:07:02.075929       1 main.go:324] Node ha-008703-m04 has CIDR [10.244.3.0/24] 
	I1212 21:07:02.076051       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 192.168.49.5 Flags: [] Table: 0 Realm: 0} 
	I1212 21:07:02.076199       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 21:07:02.076238       1 main.go:301] handling current node
	I1212 21:07:12.074942       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 21:07:12.074981       1 main.go:301] handling current node
	I1212 21:07:12.074999       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1212 21:07:12.075015       1 main.go:324] Node ha-008703-m02 has CIDR [10.244.1.0/24] 
	I1212 21:07:12.075206       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1212 21:07:12.075219       1 main.go:324] Node ha-008703-m03 has CIDR [10.244.2.0/24] 
	I1212 21:07:12.075288       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1212 21:07:12.075298       1 main.go:324] Node ha-008703-m04 has CIDR [10.244.3.0/24] 
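
The kindnet lines above show the CNI agent on the primary node re-learning each peer node's pod CIDR and installing a host route via that node's InternalIP (10.244.1.0/24 via 192.168.49.3, and so on). Those routes can be inspected directly on the node; a sketch, assuming the profile name matches the node name shown above:

  minikube -p ha-008703 ssh -- ip route | grep 10.244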
	
	
	==> kube-apiserver [6e71e63256727335b637c10c11453815d5622c8d5eb3fb9654535f5b4b692c2f] <==
	I1212 21:05:47.565735       1 server.go:150] Version: v1.34.2
	I1212 21:05:47.569343       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1212 21:05:49.281036       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1212 21:05:49.281145       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1212 21:05:49.281179       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1212 21:05:49.281210       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1212 21:05:49.281240       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1212 21:05:49.281267       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1212 21:05:49.281295       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1212 21:05:49.281322       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1212 21:05:49.281350       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1212 21:05:49.281379       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1212 21:05:49.281408       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1212 21:05:49.281437       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1212 21:05:49.315159       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:05:49.315278       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1212 21:05:49.320436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1212 21:05:49.332820       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 21:05:49.333128       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1212 21:05:49.333192       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1212 21:05:49.333470       1 instance.go:239] Using reconciler: lease
	W1212 21:05:49.335311       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:06:09.313704       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1212 21:06:09.313704       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1212 21:06:09.334486       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
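
This kube-apiserver instance appears to have exited after its etcd client could not complete a connection to 127.0.0.1:2379 before the storage-factory deadline (the fatal "Error creating leases" line); the next section shows the replacement instance that came up successfully about ten seconds later. Listing both container attempts on the node is one way to confirm the restart; a sketch, assuming the default profile name:

  minikube -p ha-008703 ssh -- sudo crictl ps -a --name kube-apiserver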
	
	
	==> kube-apiserver [93fc3054083af7a4f11519559898692bcb87a0a869c0e823fd96f50def2f02cd] <==
	I1212 21:06:20.368230       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 21:06:20.400872       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 21:06:20.412450       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 21:06:20.421494       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 21:06:20.413161       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 21:06:20.433292       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 21:06:20.435830       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 21:06:20.439607       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 21:06:20.439971       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 21:06:20.446200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 21:06:20.446507       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 21:06:20.451816       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 21:06:20.466902       1 cache.go:39] Caches are synced for autoregister controller
	W1212 21:06:20.494872       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I1212 21:06:20.498501       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 21:06:20.540491       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 21:06:20.544831       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1212 21:06:20.560023       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1212 21:06:20.915382       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 21:06:21.151536       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1212 21:06:24.277503       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I1212 21:06:26.132404       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 21:06:26.286031       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 21:06:26.435234       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	W1212 21:06:34.277202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	
	
	==> kube-controller-manager [03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d] <==
	I1212 21:05:49.621747       1 serving.go:386] Generated self-signed cert in-memory
	I1212 21:05:50.751392       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1212 21:05:50.752418       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:05:50.756190       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1212 21:05:50.756306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 21:05:50.756352       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 21:05:50.756362       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1212 21:06:20.286877       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [f08cf114510a22705e6eddaabf72535ab357ca9404fe3342c1903bc51578da78] <==
	I1212 21:06:25.947009       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-008703"
	I1212 21:06:25.947060       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 21:06:25.946360       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 21:06:25.948255       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 21:06:25.948778       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 21:06:25.949912       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1212 21:06:25.956884       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 21:06:25.956955       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 21:06:25.958970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 21:06:25.962893       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 21:06:25.966650       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 21:06:25.966831       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 21:06:25.966929       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 21:06:25.970777       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1212 21:06:25.977116       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 21:06:25.978294       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 21:06:25.978569       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 21:06:25.979499       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 21:06:25.983384       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 21:06:25.991347       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 21:06:25.992778       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 21:06:26.003403       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 21:06:26.005063       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 21:07:03.404820       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-88mnq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-88mnq\": the object has been modified; please apply your changes to the latest version and try again"
	I1212 21:07:03.412728       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0e70dacf-1fbe-4ce7-930f-4790639720ae", APIVersion:"v1", ResourceVersion:"293", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-88mnq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-88mnq": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [2b11faa987b07a654a1ecb1110634491c33e925915fa00680eccd4a7874806fc] <==
	I1212 21:06:23.734028       1 server_linux.go:53] "Using iptables proxy"
	I1212 21:06:24.050201       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 21:06:24.251547       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 21:06:24.251592       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1212 21:06:24.251667       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 21:06:24.378453       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 21:06:24.378516       1 server_linux.go:132] "Using iptables Proxier"
	I1212 21:06:24.392940       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 21:06:24.393314       1 server.go:527] "Version info" version="v1.34.2"
	I1212 21:06:24.393544       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:06:24.394794       1 config.go:200] "Starting service config controller"
	I1212 21:06:24.394851       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 21:06:24.394892       1 config.go:106] "Starting endpoint slice config controller"
	I1212 21:06:24.394921       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 21:06:24.394957       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 21:06:24.394983       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 21:06:24.395714       1 config.go:309] "Starting node config controller"
	I1212 21:06:24.398250       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 21:06:24.398321       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 21:06:24.497136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 21:06:24.497308       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 21:06:24.497322       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [05ba874359221bdf846b1fb8dbe911f962d4cf06c723c81f7a60410d0ca7fa2b] <==
	I1212 21:06:20.248139       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 21:06:20.248183       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:06:20.270188       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:06:20.270295       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:06:20.276803       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 21:06:20.277005       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1212 21:06:20.368920       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1212 21:06:20.369035       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 21:06:20.369105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 21:06:20.369154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:06:20.369207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 21:06:20.369802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:06:20.369869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:06:20.369925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:06:20.369973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 21:06:20.370030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 21:06:20.370079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 21:06:20.370124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 21:06:20.371252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 21:06:20.371299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 21:06:20.371338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 21:06:20.438949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 21:06:20.444983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 21:06:20.445109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1212 21:06:20.470730       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 12 21:06:20 ha-008703 kubelet[764]: E1212 21:06:20.676261     764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-008703\" already exists" pod="kube-system/kube-controller-manager-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.676518     764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.684227     764 apiserver.go:52] "Watching apiserver"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.715180     764 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-008703" podUID="13ad7cce-3343-4a6d-b066-b55715ef2727"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.733772     764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c574b029f9f86252bb40df91aa285cf" path="/var/lib/kubelet/pods/4c574b029f9f86252bb40df91aa285cf/volumes"
	Dec 12 21:06:20 ha-008703 kubelet[764]: E1212 21:06:20.737750     764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-008703\" already exists" pod="kube-system/kube-scheduler-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.772520     764 kubelet.go:3209] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.772704     764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.789443     764 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.857272     764 scope.go:117] "RemoveContainer" containerID="03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.891614     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee2850f7-5474-48e9-b8dc-f9e14292127e-xtables-lock\") pod \"kube-proxy-tgx5j\" (UID: \"ee2850f7-5474-48e9-b8dc-f9e14292127e\") " pod="kube-system/kube-proxy-tgx5j"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.891885     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee2850f7-5474-48e9-b8dc-f9e14292127e-lib-modules\") pod \"kube-proxy-tgx5j\" (UID: \"ee2850f7-5474-48e9-b8dc-f9e14292127e\") " pod="kube-system/kube-proxy-tgx5j"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892133     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-xtables-lock\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892297     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d57f23f-4461-4d86-b91f-e2628d8874ab-tmp\") pod \"storage-provisioner\" (UID: \"2d57f23f-4461-4d86-b91f-e2628d8874ab\") " pod="kube-system/storage-provisioner"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892406     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-cni-cfg\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.898926     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-lib-modules\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.897461     764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-008703" podStartSLOduration=0.897445384 podStartE2EDuration="897.445384ms" podCreationTimestamp="2025-12-12 21:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 21:06:20.850652974 +0000 UTC m=+34.291145116" watchObservedRunningTime="2025-12-12 21:06:20.897445384 +0000 UTC m=+34.337937510"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.972495     764 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.192647     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e WatchSource:0}: Error finding container b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e: Status 404 returned error can't find the container with id b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.402414     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226 WatchSource:0}: Error finding container 1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226: Status 404 returned error can't find the container with id 1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.434279     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced WatchSource:0}: Error finding container 021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced: Status 404 returned error can't find the container with id 021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.570067     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967 WatchSource:0}: Error finding container 2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967: Status 404 returned error can't find the container with id 2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967
	Dec 12 21:06:46 ha-008703 kubelet[764]: E1212 21:06:46.699197     764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50\": container with ID starting with f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50 not found: ID does not exist" containerID="f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50"
	Dec 12 21:06:46 ha-008703 kubelet[764]: I1212 21:06:46.699251     764 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50" err="rpc error: code = NotFound desc = could not find container \"f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50\": container with ID starting with f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50 not found: ID does not exist"
	Dec 12 21:06:53 ha-008703 kubelet[764]: I1212 21:06:53.074350     764 scope.go:117] "RemoveContainer" containerID="82dd101ece4d11a82b5e84808cb05db3a78e943db22ae1196fbeeda7f49c4b53"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703
helpers_test.go:270: (dbg) Run:  kubectl --context ha-008703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (4.52s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (91.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 node add --control-plane --alsologtostderr -v 5
E1212 21:07:27.141605  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:07:35.805708  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:07:44.066202  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 node add --control-plane --alsologtostderr -v 5: (1m26.665458355s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5: (1.467577031s)
ha_test.go:618: status says not all three control-plane nodes are present: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-008703-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:621: status says not all four hosts are running: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-008703-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:624: status says not all four kubelets are running: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-008703-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha_test.go:627: status says not all three apiservers are running: args "out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5": ha-008703
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m02
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-008703-m04
type: Worker
host: Running
kubelet: Running

                                                
                                                
ha-008703-m05
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-008703
helpers_test.go:244: (dbg) docker inspect ha-008703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	        "Created": "2025-12-12T20:51:45.347520369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 449316,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:05:39.880681825Z",
	            "FinishedAt": "2025-12-12T21:05:38.645326548Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hosts",
	        "LogPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a-json.log",
	        "Name": "/ha-008703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-008703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-008703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	                "LowerDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-008703",
	                "Source": "/var/lib/docker/volumes/ha-008703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-008703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-008703",
	                "name.minikube.sigs.k8s.io": "ha-008703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56820d5d7e78ec2f02da47e339541c9ef651db5d532d64770a21ce2bbb5446a4",
	            "SandboxKey": "/var/run/docker/netns/56820d5d7e78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-008703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:e7:89:49:21:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff7ed303f4da65b7f5bbe1449be583e134fa05bb2920a77ae31b6f437cc1bd4b",
	                    "EndpointID": "3c6a3818203b2804ed1a97d15e01e57b58ac1b4d017d616dc02dd9125b0a0f3c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-008703",
	                        "2ec03df03a30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703
helpers_test.go:253: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 logs -n 25: (1.722587415s)
helpers_test.go:261: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp testdata/cp-test.txt ha-008703-m04:/home/docker/cp-test.txt                                                            │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703-m04.txt │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703:/home/docker/cp-test_ha-008703-m04_ha-008703.txt                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703.txt                                                │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m02 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m03:/home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node start m02 --alsologtostderr -v 5                                                                                     │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:57 UTC │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │ 12 Dec 25 20:57 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5                                                                                  │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	│ node    │ ha-008703 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │ 12 Dec 25 21:05 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │ 12 Dec 25 21:07 UTC │
	│ node    │ ha-008703 node add --control-plane --alsologtostderr -v 5                                                                           │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:07 UTC │ 12 Dec 25 21:08 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:05:39
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:05:39.605178  449185 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:05:39.605402  449185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:39.605430  449185 out.go:374] Setting ErrFile to fd 2...
	I1212 21:05:39.605450  449185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:39.605864  449185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:05:39.606369  449185 out.go:368] Setting JSON to false
	I1212 21:05:39.607946  449185 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":13692,"bootTime":1765559848,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 21:05:39.608060  449185 start.go:143] virtualization:  
	I1212 21:05:39.611335  449185 out.go:179] * [ha-008703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 21:05:39.615242  449185 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:05:39.615314  449185 notify.go:221] Checking for updates...
	I1212 21:05:39.621077  449185 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:05:39.623949  449185 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:39.626804  449185 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 21:05:39.629715  449185 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 21:05:39.632603  449185 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:05:39.635954  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:39.636566  449185 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:05:39.669276  449185 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 21:05:39.669398  449185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:05:39.732289  449185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 21:05:39.722148611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:05:39.732454  449185 docker.go:319] overlay module found
	I1212 21:05:39.735677  449185 out.go:179] * Using the docker driver based on existing profile
	I1212 21:05:39.738449  449185 start.go:309] selected driver: docker
	I1212 21:05:39.738468  449185 start.go:927] validating driver "docker" against &{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:39.738617  449185 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:05:39.738715  449185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:05:39.793928  449185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 21:05:39.784653162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:05:39.794497  449185 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:05:39.794535  449185 cni.go:84] Creating CNI manager for ""
	I1212 21:05:39.794590  449185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 21:05:39.794655  449185 start.go:353] cluster config:
	{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:39.797771  449185 out.go:179] * Starting "ha-008703" primary control-plane node in "ha-008703" cluster
	I1212 21:05:39.800532  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:05:39.803460  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:05:39.806386  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:39.806435  449185 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 21:05:39.806449  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:05:39.806468  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:05:39.806557  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:05:39.806568  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:05:39.806736  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:39.826241  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:05:39.826266  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:05:39.826283  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:05:39.826317  449185 start.go:360] acquireMachinesLock for ha-008703: {Name:mk6e7d74f274e3ed345384f8b747c056bd141bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:05:39.826376  449185 start.go:364] duration metric: took 38.285µs to acquireMachinesLock for "ha-008703"
	I1212 21:05:39.826401  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:05:39.826407  449185 fix.go:54] fixHost starting: 
	I1212 21:05:39.826688  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:39.844490  449185 fix.go:112] recreateIfNeeded on ha-008703: state=Stopped err=<nil>
	W1212 21:05:39.844521  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:05:39.847711  449185 out.go:252] * Restarting existing docker container for "ha-008703" ...
	I1212 21:05:39.847788  449185 cli_runner.go:164] Run: docker start ha-008703
	I1212 21:05:40.139310  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:40.163240  449185 kic.go:430] container "ha-008703" state is running.
	I1212 21:05:40.163662  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:40.191201  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:40.191459  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:05:40.191534  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:40.219354  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:40.219684  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:40.219693  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:05:40.220585  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:05:43.371942  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 21:05:43.371968  449185 ubuntu.go:182] provisioning hostname "ha-008703"
	I1212 21:05:43.372054  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.389586  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:43.389913  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:43.389930  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703 && echo "ha-008703" | sudo tee /etc/hostname
	I1212 21:05:43.553625  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 21:05:43.553711  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.571751  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:43.572079  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:43.572102  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:05:43.724831  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
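	The script above is the idempotent form of the hostname fix: /etc/hosts is only touched when no line already maps the new hostname, and an existing 127.0.1.1 entry is rewritten rather than duplicated. A standalone sketch of the same logic (NEW_HOSTNAME is a placeholder taken from the log above, not a minikube variable):

	    NEW_HOSTNAME=ha-008703                                   # placeholder; value from the log above
	    sudo hostname "$NEW_HOSTNAME"
	    if ! grep -q "[[:space:]]$NEW_HOSTNAME\$" /etc/hosts; then
	      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NEW_HOSTNAME/" /etc/hosts
	      else
	        echo "127.0.1.1 $NEW_HOSTNAME" | sudo tee -a /etc/hosts
	      fi
	    fi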
	I1212 21:05:43.724856  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:05:43.724884  449185 ubuntu.go:190] setting up certificates
	I1212 21:05:43.724903  449185 provision.go:84] configureAuth start
	I1212 21:05:43.724977  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:43.743377  449185 provision.go:143] copyHostCerts
	I1212 21:05:43.743421  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:43.743463  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:05:43.743471  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:43.743550  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:05:43.743646  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:43.743662  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:05:43.743667  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:43.743692  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:05:43.743751  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:43.743767  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:05:43.743771  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:43.743797  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:05:43.743859  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703 san=[127.0.0.1 192.168.49.2 ha-008703 localhost minikube]
	I1212 21:05:43.832472  449185 provision.go:177] copyRemoteCerts
	I1212 21:05:43.832541  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:05:43.832590  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.850299  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:43.956285  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:05:43.956420  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:05:43.974303  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:05:43.974381  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1212 21:05:43.992649  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:05:43.992714  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:05:44.013810  449185 provision.go:87] duration metric: took 288.892734ms to configureAuth
	I1212 21:05:44.013838  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:05:44.014088  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:44.014212  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.036649  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:44.037017  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:44.037041  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:05:44.386038  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:05:44.386060  449185 machine.go:97] duration metric: took 4.194590859s to provisionDockerMachine
	I1212 21:05:44.386072  449185 start.go:293] postStartSetup for "ha-008703" (driver="docker")
	I1212 21:05:44.386084  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:05:44.386193  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:05:44.386264  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.403386  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.508670  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:05:44.512195  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:05:44.512221  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:05:44.512236  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:05:44.512291  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:05:44.512398  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:05:44.512408  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:05:44.512511  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:05:44.520678  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:44.539590  449185 start.go:296] duration metric: took 153.501859ms for postStartSetup
	I1212 21:05:44.539670  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:05:44.539734  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.557736  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.661664  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:05:44.666383  449185 fix.go:56] duration metric: took 4.839968923s for fixHost
	I1212 21:05:44.666409  449185 start.go:83] releasing machines lock for "ha-008703", held for 4.840020362s
	I1212 21:05:44.666477  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:44.684762  449185 ssh_runner.go:195] Run: cat /version.json
	I1212 21:05:44.684817  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.685079  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:05:44.685134  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.708523  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.712753  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.904198  449185 ssh_runner.go:195] Run: systemctl --version
	I1212 21:05:44.910603  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:05:44.946561  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:05:44.951022  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:05:44.951140  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:05:44.959060  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:05:44.959085  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:05:44.959118  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
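	detect.go reports the host's cgroup driver as "cgroupfs", and that value is pushed into the CRI-O and kubelet configuration further down. A quick, hedged way to see the same information on a host (standard commands, not what minikube runs internally):

	    stat -fc %T /sys/fs/cgroup/                                  # "cgroup2fs" = cgroup v2 unified, "tmpfs" = cgroup v1
	    docker info --format '{{.CgroupDriver}} v{{.CgroupVersion}}' # driver and cgroup version the Docker host reports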
	I1212 21:05:44.959164  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:05:44.974739  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:05:44.987642  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:05:44.987758  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:05:45.005197  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:05:45.023356  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:05:45.187771  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:05:45.360312  449185 docker.go:234] disabling docker service ...
	I1212 21:05:45.360416  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:05:45.382556  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:05:45.397072  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:05:45.515232  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:05:45.630674  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:05:45.644319  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:05:45.659761  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:05:45.659839  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.669217  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:05:45.669329  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.678932  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.691100  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.701211  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:05:45.710201  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.720671  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.729634  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.739187  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:05:45.747460  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:05:45.755441  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:45.880049  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
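	The sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup_manager, conmon_cgroup, the ip_unprivileged_port_start sysctl), enables IPv4 forwarding, and then reloads and restarts CRI-O. Condensed into a sketch for reproducing the step by hand, using the same paths and values that appear in the logged commands:

	    CONF=/etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
	    sudo sed -i '/conmon_cgroup = .*/d' "$CONF" && sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	    sudo systemctl daemon-reload && sudo systemctl restart crio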
	I1212 21:05:46.064833  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:05:46.064907  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:05:46.068969  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:05:46.069037  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:05:46.072837  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:05:46.098607  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:05:46.098708  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:46.128236  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:46.158573  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:05:46.161391  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:05:46.178132  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:05:46.181932  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:46.192021  449185 kubeadm.go:884] updating cluster {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:05:46.192177  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:46.192251  449185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:05:46.227916  449185 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:05:46.227942  449185 crio.go:433] Images already preloaded, skipping extraction
	I1212 21:05:46.227998  449185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:05:46.253605  449185 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:05:46.253629  449185 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:05:46.253638  449185 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 21:05:46.253742  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
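	Once the drop-in above lands on the node (the scp of 10-kubeadm.conf appears a few lines below), the effective kubelet invocation can be checked with standard systemd tooling; a hedged sketch, not something the test itself runs:

	    sudo systemctl cat kubelet                                      # unit plus all drop-ins, including 10-kubeadm.conf
	    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
	    systemctl show kubelet -p ExecStart --no-pager                  # flags kubelet was actually started with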
	I1212 21:05:46.253823  449185 ssh_runner.go:195] Run: crio config
	I1212 21:05:46.327816  449185 cni.go:84] Creating CNI manager for ""
	I1212 21:05:46.327839  449185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 21:05:46.327863  449185 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:05:46.327893  449185 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-008703 NodeName:ha-008703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:05:46.328051  449185 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-008703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
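	The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new (the scp below) and later diffed against the existing kubeadm.yaml before minikube decides whether any reconfiguration is needed. On a live cluster the ClusterConfiguration actually in effect can also be read back from the kubeadm-config ConfigMap; a sketch assuming kubectl access to the cluster:

	    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new   # the same check the restart path runs
	    kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}'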
	
	I1212 21:05:46.328077  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:05:46.328142  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:05:46.341034  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
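	Control-plane load balancing is skipped only because "lsmod | grep ip_vs" found no IPVS modules in this kernel; the VIP itself is still configured, and the generated manifest below uses ARP (vip_arp=true). A hedged check for hosts where IPVS balancing is wanted:

	    lsmod | grep ip_vs || echo "ip_vs not loaded"
	    sudo modprobe ip_vs 2>/dev/null && echo "ip_vs loaded" || echo "ip_vs unavailable (common inside containers)"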
	I1212 21:05:46.341215  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
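	The manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml (scp a few lines below), so the kubelet runs kube-vip as a static pod and it claims the HA virtual IP 192.168.49.254 on eth0 via ARP. A hedged way to confirm the VIP from inside the node once the kubelet is up:

	    sudo crictl ps --name kube-vip                        # static pod container started by the kubelet
	    ip -4 addr show dev eth0 | grep 192.168.49.254        # the VIP address bound by kube-vip
	    curl -k https://192.168.49.254:8443/healthz           # should answer through the VIP on a default RBAC setup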
	I1212 21:05:46.341284  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:05:46.349457  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:05:46.349531  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1212 21:05:46.357340  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1212 21:05:46.371153  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:05:46.384332  449185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1212 21:05:46.397565  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:05:46.411895  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:05:46.415692  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:46.426113  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:46.540637  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:46.557178  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.2
	I1212 21:05:46.557202  449185 certs.go:195] generating shared ca certs ...
	I1212 21:05:46.557219  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:46.557365  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:05:46.557420  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:05:46.557434  449185 certs.go:257] generating profile certs ...
	I1212 21:05:46.557525  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:05:46.557600  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904
	I1212 21:05:46.557649  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:05:46.557662  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:05:46.557674  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:05:46.557688  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:05:46.557703  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:05:46.557714  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:05:46.557731  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:05:46.557752  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:05:46.557770  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:05:46.557824  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:05:46.557861  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:05:46.557873  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:05:46.557901  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:05:46.557930  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:05:46.557955  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:05:46.558003  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:46.558037  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.558052  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.558066  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.558628  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:05:46.581904  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:05:46.602655  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:05:46.623772  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:05:46.644667  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:05:46.670849  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:05:46.690125  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:05:46.719167  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:05:46.743203  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:05:46.764296  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:05:46.788880  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:05:46.807678  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:05:46.822196  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:05:46.829401  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.838655  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:05:46.847305  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.851571  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.851686  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.894892  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:05:46.903217  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.911071  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:05:46.919222  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.923110  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.923186  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.964916  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:05:46.972957  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.980730  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:05:46.989130  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.993540  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.993610  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:05:47.036478  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
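	The repeated pattern above (copy the PEM into /usr/share/ca-certificates, ln -fs it into /etc/ssl/certs, then verify the <hash>.0 symlink) installs each CA into the system trust store; the symlink name is the certificate's OpenSSL subject hash. A sketch for a single file, with CERT as a placeholder:

	    CERT=/usr/share/ca-certificates/minikubeCA.pem           # placeholder path from the log
	    HASH=$(openssl x509 -hash -noout -in "$CERT")            # e.g. b5213941 for minikubeCA above
	    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
	    sudo test -L "/etc/ssl/certs/${HASH}.0" && echo "installed"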
	I1212 21:05:47.044309  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:05:47.048593  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:05:47.091048  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:05:47.132635  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:05:47.184472  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:05:47.233316  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:05:47.289483  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
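	Each "-checkend 86400" call exits non-zero if the certificate expires within the next 24 hours, which is apparently how the restart path decides the existing control-plane certs can be reused. The same six checks as a compact loop:

	    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	             etcd/server etcd/healthcheck-client etcd/peer; do
	      sudo openssl x509 -noout -checkend 86400 -in "/var/lib/minikube/certs/$c.crt" \
	        || echo "$c expires within 24h"
	    done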
	I1212 21:05:47.363953  449185 kubeadm.go:401] StartCluster: {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:47.364111  449185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:05:47.364177  449185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:05:47.424432  449185 cri.go:89] found id: "05ba874359221bdf846b1fb8dbe911f962d4cf06c723c81f7a60410d0ca7fa2b"
	I1212 21:05:47.424457  449185 cri.go:89] found id: "6e71e63256727335b637c10c11453815d5622c8d5eb3fb9654535f5b4b692c2f"
	I1212 21:05:47.424463  449185 cri.go:89] found id: "62a05b797d32258dc4368afc3978a5b3f463b4eafed6049189130af79138e299"
	I1212 21:05:47.424466  449185 cri.go:89] found id: "03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d"
	I1212 21:05:47.424469  449185 cri.go:89] found id: "e2542b7b3b0add4c1c8e1167b6f86cc40b8c70e55d0db7ae97014db17bfee8b2"
	I1212 21:05:47.424473  449185 cri.go:89] found id: ""
	I1212 21:05:47.424525  449185 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 21:05:47.441549  449185 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T21:05:47Z" level=error msg="open /run/runc: no such file or directory"
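	The "runc list" failure is tolerated (logged as a warning, and the flow continues) because /run/runc, runc's default state root, does not exist until containers have been created there on the freshly restarted node; the CRI-level listing a few lines above already answered the same question. A hedged pair of equivalent checks:

	    sudo runc list -f json || true                                              # fails here: /run/runc missing
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # the CRI query used above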
	I1212 21:05:47.441640  449185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:05:47.453706  449185 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:05:47.453729  449185 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:05:47.453787  449185 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:05:47.466638  449185 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:47.467064  449185 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-008703" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:47.467171  449185 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "ha-008703" cluster setting kubeconfig missing "ha-008703" context setting]
	I1212 21:05:47.467570  449185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.468100  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:05:47.468627  449185 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 21:05:47.468649  449185 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 21:05:47.468655  449185 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 21:05:47.468661  449185 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 21:05:47.468665  449185 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 21:05:47.468983  449185 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:05:47.469097  449185 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 21:05:47.477581  449185 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 21:05:47.477605  449185 kubeadm.go:602] duration metric: took 23.869575ms to restartPrimaryControlPlane
	I1212 21:05:47.477614  449185 kubeadm.go:403] duration metric: took 113.6735ms to StartCluster
	I1212 21:05:47.477631  449185 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.477689  449185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:47.478278  449185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.478485  449185 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:05:47.478512  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:05:47.478526  449185 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:05:47.479081  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:47.484597  449185 out.go:179] * Enabled addons: 
	I1212 21:05:47.487542  449185 addons.go:530] duration metric: took 9.010305ms for enable addons: enabled=[]
	I1212 21:05:47.487605  449185 start.go:247] waiting for cluster config update ...
	I1212 21:05:47.487614  449185 start.go:256] writing updated cluster config ...
	I1212 21:05:47.491098  449185 out.go:203] 
	I1212 21:05:47.494772  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:47.494914  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:47.498660  449185 out.go:179] * Starting "ha-008703-m02" control-plane node in "ha-008703" cluster
	I1212 21:05:47.501545  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:05:47.504535  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:05:47.507691  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:47.507726  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:05:47.507835  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:05:47.507851  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:05:47.507972  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:47.508202  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:05:47.538497  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:05:47.538521  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:05:47.538535  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:05:47.538559  449185 start.go:360] acquireMachinesLock for ha-008703-m02: {Name:mk9bbd559a38ee71084b431688c18ccf671707a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:05:47.538627  449185 start.go:364] duration metric: took 48.131µs to acquireMachinesLock for "ha-008703-m02"
	I1212 21:05:47.538652  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:05:47.538660  449185 fix.go:54] fixHost starting: m02
	I1212 21:05:47.538948  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:47.574023  449185 fix.go:112] recreateIfNeeded on ha-008703-m02: state=Stopped err=<nil>
	W1212 21:05:47.574053  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:05:47.577557  449185 out.go:252] * Restarting existing docker container for "ha-008703-m02" ...
	I1212 21:05:47.577655  449185 cli_runner.go:164] Run: docker start ha-008703-m02
	I1212 21:05:47.980330  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:48.008294  449185 kic.go:430] container "ha-008703-m02" state is running.
	I1212 21:05:48.008939  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:48.047188  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:48.047422  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:05:48.047478  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:48.078749  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:48.079063  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:48.079074  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:05:48.079845  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44600->127.0.0.1:33207: read: connection reset by peer
	I1212 21:05:51.328699  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 21:05:51.328723  449185 ubuntu.go:182] provisioning hostname "ha-008703-m02"
	I1212 21:05:51.328784  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:51.373011  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:51.373328  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:51.373339  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m02 && echo "ha-008703-m02" | sudo tee /etc/hostname
	I1212 21:05:51.672250  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 21:05:51.672411  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:51.697392  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:51.697707  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:51.697724  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:05:51.885149  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
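The provisioning steps above (setting the hostname, then patching /etc/hosts) are plain shell commands executed over SSH against the restarted container on 127.0.0.1:33207. A minimal Go sketch of that pattern using golang.org/x/crypto/ssh follows; it is an illustrative stand-in, not minikube's actual ssh_runner/sshutil code, and the command string is just the hostname step copied from the log.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote opens an SSH session and runs one command, returning its
    // combined stdout/stderr. Conceptually this is what the ssh_runner lines
    // in the log do; it is not the real minikube implementation.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test rig only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        // Values taken from the log: published SSH port 33207, user "docker",
        // the node's id_rsa key.
        out, err := runRemote("127.0.0.1:33207", "docker",
            "/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa",
            `sudo hostname ha-008703-m02 && echo "ha-008703-m02" | sudo tee /etc/hostname`)
        fmt.Println(out, err)
    }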
	I1212 21:05:51.885219  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:05:51.885252  449185 ubuntu.go:190] setting up certificates
	I1212 21:05:51.885290  449185 provision.go:84] configureAuth start
	I1212 21:05:51.885368  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:51.907559  449185 provision.go:143] copyHostCerts
	I1212 21:05:51.907599  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:51.907631  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:05:51.907638  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:51.907718  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:05:51.907797  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:51.907814  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:05:51.907820  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:51.907846  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:05:51.907886  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:51.907901  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:05:51.907905  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:51.907929  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:05:51.907973  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m02 san=[127.0.0.1 192.168.49.3 ha-008703-m02 localhost minikube]
	I1212 21:05:52.137179  449185 provision.go:177] copyRemoteCerts
	I1212 21:05:52.137300  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:05:52.137386  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:52.156094  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:52.288849  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:05:52.288913  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:05:52.342195  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:05:52.342258  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:05:52.393562  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:05:52.393620  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:05:52.445696  449185 provision.go:87] duration metric: took 560.374153ms to configureAuth
	I1212 21:05:52.445764  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:05:52.446027  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:52.446170  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:52.478675  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:52.478980  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:52.478993  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:05:53.000008  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:05:53.000110  449185 machine.go:97] duration metric: took 4.952677944s to provisionDockerMachine
	I1212 21:05:53.000138  449185 start.go:293] postStartSetup for "ha-008703-m02" (driver="docker")
	I1212 21:05:53.000177  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:05:53.000293  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:05:53.000358  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.020786  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.128335  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:05:53.131751  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:05:53.131783  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:05:53.131795  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:05:53.131855  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:05:53.131934  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:05:53.131947  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:05:53.132049  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:05:53.139844  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:53.158393  449185 start.go:296] duration metric: took 158.21332ms for postStartSetup
	I1212 21:05:53.158474  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:05:53.158534  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.176037  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.281959  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:05:53.287302  449185 fix.go:56] duration metric: took 5.74863443s for fixHost
	I1212 21:05:53.287331  449185 start.go:83] releasing machines lock for "ha-008703-m02", held for 5.748691916s
	I1212 21:05:53.287402  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:53.307739  449185 out.go:179] * Found network options:
	I1212 21:05:53.310522  449185 out.go:179]   - NO_PROXY=192.168.49.2
	W1212 21:05:53.313363  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:05:53.313414  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:05:53.313489  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:05:53.313533  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.313574  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:05:53.313632  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.336547  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.336799  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.542870  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:05:53.567799  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:05:53.567925  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:05:53.589478  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:05:53.589553  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:05:53.589598  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:05:53.589671  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:05:53.609030  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:05:53.638599  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:05:53.638724  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:05:53.668742  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:05:53.694088  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:05:53.934693  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:05:54.164277  449185 docker.go:234] disabling docker service ...
	I1212 21:05:54.164417  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:05:54.185997  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:05:54.207462  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:05:54.437335  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:05:54.661473  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:05:54.679927  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:05:54.707742  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:05:54.707861  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.723319  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:05:54.723443  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.740396  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.751373  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.768858  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:05:54.780854  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.795944  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.808854  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.818935  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:05:54.833159  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:05:54.849406  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:55.082636  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:05:55.362814  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:05:55.362938  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:05:55.366812  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:05:55.366918  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:05:55.370570  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:05:55.399084  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:05:55.399168  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:55.428944  449185 ssh_runner.go:195] Run: crio --version
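After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl and crio --version. A simplified sketch of such a socket-wait loop, under the assumption of a fixed 500ms poll interval (not the actual start.go implementation):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists or the timeout expires; a
    // stand-in for the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
            fmt.Println(err)
        }
    }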
	I1212 21:05:55.460814  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:05:55.463826  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:05:55.466808  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:05:55.495103  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:05:55.503442  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:55.518854  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:05:55.519096  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:55.519362  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:55.545294  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:05:55.545592  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.3
	I1212 21:05:55.545608  449185 certs.go:195] generating shared ca certs ...
	I1212 21:05:55.545622  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:55.545735  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:05:55.545785  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:05:55.545796  449185 certs.go:257] generating profile certs ...
	I1212 21:05:55.545885  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:05:55.545952  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.b6a91b51
	I1212 21:05:55.546008  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:05:55.546022  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:05:55.546043  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:05:55.546059  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:05:55.546082  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:05:55.546098  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:05:55.546112  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:05:55.546126  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:05:55.546142  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:05:55.546197  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:05:55.546246  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:05:55.546262  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:05:55.546293  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:05:55.546320  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:05:55.546354  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:05:55.546415  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:55.546463  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:55.546490  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:05:55.546515  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:05:55.546583  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:55.568767  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:55.668715  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 21:05:55.672576  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 21:05:55.680945  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 21:05:55.684500  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 21:05:55.693000  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 21:05:55.696718  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 21:05:55.704917  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 21:05:55.708459  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 21:05:55.717032  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 21:05:55.720547  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 21:05:55.728907  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 21:05:55.732537  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 21:05:55.740854  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:05:55.760026  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:05:55.778517  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:05:55.797624  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:05:55.817142  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:05:55.835385  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:05:55.853338  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:05:55.872093  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:05:55.890019  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:05:55.908331  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:05:55.926030  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:05:55.944002  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 21:05:55.956838  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 21:05:55.969593  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 21:05:55.982132  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 21:05:55.995578  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 21:05:56.013190  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 21:05:56.026969  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 21:05:56.040988  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:05:56.047942  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.056004  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:05:56.064163  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.068273  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.068362  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.109836  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:05:56.118260  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.126352  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:05:56.134010  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.137848  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.137914  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.179470  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:05:56.187587  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.195301  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:05:56.203258  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.207359  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.207467  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.248706  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:05:56.256310  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:05:56.260190  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:05:56.306385  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:05:56.347361  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:05:56.389865  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:05:56.430835  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:05:56.472973  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:05:56.521282  449185 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1212 21:05:56.521453  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:05:56.521498  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:05:56.521575  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:05:56.534831  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:56.534951  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
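The kube-vip step above first probes for ip_vs kernel modules (sudo sh -c "lsmod | grep ip_vs"); since none are loaded, it gives up on control-plane load balancing and emits the ARP-mode VIP manifest shown. A hedged sketch of that decision, run locally rather than over SSH for brevity; the mention of kube-vip's load-balancing toggle in the comment is an assumption, not something this log confirms:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // hasIPVS reports whether ip_vs kernel modules are loaded, mirroring the
    // `lsmod | grep ip_vs` probe in the log (grep exits non-zero on no match).
    func hasIPVS() bool {
        return exec.Command("sh", "-c", "lsmod | grep ip_vs").Run() == nil
    }

    func main() {
        if hasIPVS() {
            // Hypothetical branch: generate a manifest with kube-vip's
            // load-balancing enabled.
            fmt.Println("generate kube-vip config with control-plane load balancing")
        } else {
            // The branch this run took: ARP-mode VIP only, address 192.168.49.254.
            fmt.Println("generate ARP-only kube-vip config for VIP 192.168.49.254")
        }
    }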
	I1212 21:05:56.535047  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:05:56.543116  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:05:56.543223  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 21:05:56.551463  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:05:56.566227  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:05:56.579329  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:05:56.592969  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:05:56.596983  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:56.607297  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:56.744346  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:56.759793  449185 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:05:56.760120  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:56.766599  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:05:56.769234  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:56.908410  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:56.923082  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:05:56.923202  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:05:56.923464  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m02" to be "Ready" ...
	W1212 21:06:06.924664  449185 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:06:10.340284  449185 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:06:20.254665  449185 node_ready.go:49] node "ha-008703-m02" is "Ready"
	I1212 21:06:20.254694  449185 node_ready.go:38] duration metric: took 23.33118731s for node "ha-008703-m02" to be "Ready" ...
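The "waiting up to 6m0s for node ... to be Ready" step is a poll of the node's Ready condition through the API server, tolerating transient errors such as the TLS handshake timeout seen above. A minimal client-go sketch of the same wait, assuming a kubeconfig file path (hypothetical) rather than minikube's in-memory rest.Config:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls until the named node reports Ready=True or the
    // timeout expires; transient Get errors are retried rather than fatal.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // e.g. TLS handshake timeout -> retry
                }
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println(waitNodeReady(cs, "ha-008703-m02", 6*time.Minute))
    }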
	I1212 21:06:20.254707  449185 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:06:20.254768  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:20.278828  449185 api_server.go:72] duration metric: took 23.518673135s to wait for apiserver process to appear ...
	I1212 21:06:20.278854  449185 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:06:20.278876  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:20.361760  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:06:20.361785  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:06:20.779312  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:20.809650  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:20.809728  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:21.279043  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:21.326274  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:21.326348  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:21.779606  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:21.811129  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:21.811210  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:22.279504  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:22.299466  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:22.299549  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:22.779116  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:22.797946  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:22.798028  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:23.279662  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:23.308514  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:23.308642  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:23.779220  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:23.800333  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:23.800429  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:24.278995  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:24.291485  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 21:06:24.307186  449185 api_server.go:141] control plane version: v1.34.2
	I1212 21:06:24.307278  449185 api_server.go:131] duration metric: took 4.028399738s to wait for apiserver health ...
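The repeated 500 responses above are minikube polling the apiserver's /healthz endpoint roughly twice a second until the failing rbac/bootstrap-roles post-start hook completes and the endpoint returns 200. A minimal wait loop of the same shape is sketched below in Go; the URL, interval, timeout, and the relaxed TLS verification are illustrative assumptions, not minikube's actual implementation.

    // healthzwait.go: poll an apiserver /healthz endpoint until it reports healthy.
    // Sketch only; URL, timeout, and TLS handling are illustrative assumptions.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // The apiserver certificate is self-signed in this setup, so skip verification here.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: control plane is up
                }
            }
            time.Sleep(500 * time.Millisecond) // roughly the interval visible in the log
        }
        return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.49.2:8443/healthz", 4*time.Minute); err != nil {
            fmt.Println(err)
        }
    }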
	I1212 21:06:24.307306  449185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:06:24.326207  449185 system_pods.go:59] 26 kube-system pods found
	I1212 21:06:24.326317  449185 system_pods.go:61] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.326341  449185 system_pods.go:61] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.326383  449185 system_pods.go:61] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:24.326404  449185 system_pods.go:61] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:06:24.326425  449185 system_pods.go:61] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:24.326458  449185 system_pods.go:61] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:24.326482  449185 system_pods.go:61] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:24.326502  449185 system_pods.go:61] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:24.326524  449185 system_pods.go:61] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:24.326559  449185 system_pods.go:61] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.326604  449185 system_pods.go:61] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.326624  449185 system_pods.go:61] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:24.326647  449185 system_pods.go:61] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.326684  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.326711  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:24.326732  449185 system_pods.go:61] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:24.326752  449185 system_pods.go:61] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:24.326770  449185 system_pods.go:61] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:24.326797  449185 system_pods.go:61] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:24.326828  449185 system_pods.go:61] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:24.326851  449185 system_pods.go:61] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:06:24.326870  449185 system_pods.go:61] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:24.326900  449185 system_pods.go:61] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:24.326923  449185 system_pods.go:61] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:24.326944  449185 system_pods.go:61] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:24.326964  449185 system_pods.go:61] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running
	I1212 21:06:24.326987  449185 system_pods.go:74] duration metric: took 19.648646ms to wait for pod list to return data ...
	I1212 21:06:24.327025  449185 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:06:24.345476  449185 default_sa.go:45] found service account: "default"
	I1212 21:06:24.345542  449185 default_sa.go:55] duration metric: took 18.497613ms for default service account to be created ...
	I1212 21:06:24.345567  449185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:06:24.441449  449185 system_pods.go:86] 26 kube-system pods found
	I1212 21:06:24.441494  449185 system_pods.go:89] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.441509  449185 system_pods.go:89] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.441517  449185 system_pods.go:89] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:24.441529  449185 system_pods.go:89] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:06:24.441537  449185 system_pods.go:89] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:24.441542  449185 system_pods.go:89] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:24.441549  449185 system_pods.go:89] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:24.441553  449185 system_pods.go:89] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:24.441557  449185 system_pods.go:89] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:24.441564  449185 system_pods.go:89] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.441576  449185 system_pods.go:89] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.441580  449185 system_pods.go:89] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:24.441592  449185 system_pods.go:89] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.441601  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.441606  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:24.441612  449185 system_pods.go:89] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:24.441616  449185 system_pods.go:89] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:24.441620  449185 system_pods.go:89] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:24.441627  449185 system_pods.go:89] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:24.441631  449185 system_pods.go:89] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:24.441646  449185 system_pods.go:89] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:06:24.441650  449185 system_pods.go:89] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:24.441654  449185 system_pods.go:89] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:24.441665  449185 system_pods.go:89] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:24.441671  449185 system_pods.go:89] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:24.441675  449185 system_pods.go:89] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running
	I1212 21:06:24.441684  449185 system_pods.go:126] duration metric: took 96.098139ms to wait for k8s-apps to be running ...
	I1212 21:06:24.441697  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:06:24.441755  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:06:24.458749  449185 system_svc.go:56] duration metric: took 17.042535ms WaitForService to wait for kubelet
	I1212 21:06:24.458826  449185 kubeadm.go:587] duration metric: took 27.69867432s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:24.458863  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:06:24.463250  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463295  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463308  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463313  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463317  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463322  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463325  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463330  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463334  449185 node_conditions.go:105] duration metric: took 4.443929ms to run NodePressure ...
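The node_conditions step reads each node's reported CPU and ephemeral-storage capacity from the API before moving on. A rough client-go equivalent is sketched below, assuming the kubeconfig path is passed as the first command-line argument; this is an illustration, not the test harness's code.

    // nodecap.go: list per-node CPU and ephemeral-storage capacity, similar to the
    // node_conditions check in the log. The kubeconfig path argument is an assumption.
    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Args[1]) // path to a kubeconfig
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
        }
    }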
	I1212 21:06:24.463360  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:06:24.463389  449185 start.go:256] writing updated cluster config ...
	I1212 21:06:24.467450  449185 out.go:203] 
	I1212 21:06:24.471714  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:24.471840  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.475478  449185 out.go:179] * Starting "ha-008703-m03" control-plane node in "ha-008703" cluster
	I1212 21:06:24.479357  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:06:24.482576  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:06:24.485573  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:06:24.485605  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:06:24.485687  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:06:24.485718  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:06:24.485736  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:06:24.485861  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.512091  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:06:24.512112  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:06:24.512126  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:06:24.512153  449185 start.go:360] acquireMachinesLock for ha-008703-m03: {Name:mkc4792dc097e09b497b46fff7452c5b0b6f70aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:24.512210  449185 start.go:364] duration metric: took 41.255µs to acquireMachinesLock for "ha-008703-m03"
	I1212 21:06:24.512230  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:06:24.512237  449185 fix.go:54] fixHost starting: m03
	I1212 21:06:24.512562  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:06:24.547705  449185 fix.go:112] recreateIfNeeded on ha-008703-m03: state=Stopped err=<nil>
	W1212 21:06:24.547736  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:06:24.551016  449185 out.go:252] * Restarting existing docker container for "ha-008703-m03" ...
	I1212 21:06:24.551124  449185 cli_runner.go:164] Run: docker start ha-008703-m03
	I1212 21:06:24.918317  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:06:24.943282  449185 kic.go:430] container "ha-008703-m03" state is running.
	I1212 21:06:24.944655  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:24.976163  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.976462  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:06:24.976536  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:25.007740  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:25.008073  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:25.008082  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:06:25.008934  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45896->127.0.0.1:33212: read: connection reset by peer
	I1212 21:06:28.195900  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m03
	
	I1212 21:06:28.195925  449185 ubuntu.go:182] provisioning hostname "ha-008703-m03"
	I1212 21:06:28.195992  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:28.238514  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:28.238834  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:28.238851  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m03 && echo "ha-008703-m03" | sudo tee /etc/hostname
	I1212 21:06:28.479384  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m03
	
	I1212 21:06:28.479480  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:28.507106  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:28.507416  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:28.507437  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:06:28.751314  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
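Provisioning of the restarted node happens over SSH to the container's forwarded port (127.0.0.1:33212 above). The sketch below shows the same pattern with golang.org/x/crypto/ssh; the address, user, and key path are assumptions taken from the log, not calls into minikube's machine store.

    // sshrun.go: run a single provisioning command over SSH, in the spirit of the
    // libmachine calls in the log. Address, user, and key path are illustrative assumptions.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local container
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("127.0.0.1:33212", "docker", "/path/to/machines/ha-008703-m03/id_rsa", "hostname")
        fmt.Println(out, err)
    }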
	I1212 21:06:28.751390  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:06:28.751429  449185 ubuntu.go:190] setting up certificates
	I1212 21:06:28.751469  449185 provision.go:84] configureAuth start
	I1212 21:06:28.751595  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:28.780423  449185 provision.go:143] copyHostCerts
	I1212 21:06:28.780473  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:28.780506  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:06:28.780519  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:28.780599  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:06:28.780687  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:28.780712  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:06:28.780720  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:28.780749  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:06:28.780795  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:28.780816  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:06:28.780823  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:28.780848  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:06:28.780902  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m03 san=[127.0.0.1 192.168.49.4 ha-008703-m03 localhost minikube]
	I1212 21:06:29.132570  449185 provision.go:177] copyRemoteCerts
	I1212 21:06:29.132679  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:06:29.132752  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:29.161077  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:29.290001  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:06:29.290063  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:06:29.326015  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:06:29.326077  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:06:29.373017  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:06:29.373102  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:06:29.430671  449185 provision.go:87] duration metric: took 679.168963ms to configureAuth
	I1212 21:06:29.430700  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:06:29.430943  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:29.431050  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:29.464440  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:29.464756  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:29.464775  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:06:30.522791  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:06:30.522817  449185 machine.go:97] duration metric: took 5.546337341s to provisionDockerMachine
	I1212 21:06:30.522830  449185 start.go:293] postStartSetup for "ha-008703-m03" (driver="docker")
	I1212 21:06:30.522841  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:06:30.522923  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:06:30.522969  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.541196  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.648836  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:06:30.652559  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:06:30.652598  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:06:30.652624  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:06:30.652708  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:06:30.652823  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:06:30.652833  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:06:30.652939  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:06:30.661331  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:30.687281  449185 start.go:296] duration metric: took 164.433925ms for postStartSetup
	I1212 21:06:30.687373  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:06:30.687421  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.713364  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.821971  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:06:30.827033  449185 fix.go:56] duration metric: took 6.314788872s for fixHost
	I1212 21:06:30.827061  449185 start.go:83] releasing machines lock for "ha-008703-m03", held for 6.314842198s
	I1212 21:06:30.827140  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:30.847749  449185 out.go:179] * Found network options:
	I1212 21:06:30.850465  449185 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1212 21:06:30.853486  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853520  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853545  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853558  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:06:30.853630  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:06:30.853672  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.853950  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:06:30.854006  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.875211  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.901708  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:31.084053  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:06:31.089338  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:06:31.089442  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:06:31.098288  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:06:31.098362  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:06:31.098418  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:06:31.098504  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:06:31.115825  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:06:31.132457  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:06:31.132578  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:06:31.150352  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:06:31.166465  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:06:31.301826  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:06:31.519838  449185 docker.go:234] disabling docker service ...
	I1212 21:06:31.519963  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:06:31.552895  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:06:31.586883  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:06:31.921487  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:06:32.171189  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:06:32.196225  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:06:32.218996  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:06:32.219066  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.231170  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:06:32.231254  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.264701  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.278943  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.293177  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:06:32.313973  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.323884  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.333399  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.345640  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:06:32.354606  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:06:32.378038  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:32.601691  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:06:32.867254  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:06:32.867377  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:06:32.871734  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:06:32.871807  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:06:32.875400  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:06:32.900774  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:06:32.900910  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:06:32.930896  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:06:32.972077  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:06:32.974985  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:06:32.977916  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1212 21:06:32.980878  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:06:32.998829  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:06:33.008314  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:06:33.019604  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:06:33.019853  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:33.020130  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:06:33.050582  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:06:33.050909  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.4
	I1212 21:06:33.050924  449185 certs.go:195] generating shared ca certs ...
	I1212 21:06:33.050954  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:06:33.051090  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:06:33.051141  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:06:33.051152  449185 certs.go:257] generating profile certs ...
	I1212 21:06:33.051239  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:06:33.051314  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.77152b1c
	I1212 21:06:33.051365  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:06:33.051374  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:06:33.051387  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:06:33.051401  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:06:33.051418  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:06:33.051430  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:06:33.051446  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:06:33.051463  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:06:33.051479  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:06:33.051535  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:06:33.051571  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:06:33.051584  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:06:33.051615  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:06:33.051643  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:06:33.051671  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:06:33.051721  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:33.051757  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.051774  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.051785  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.051851  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:06:33.071355  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:06:33.180711  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 21:06:33.184847  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 21:06:33.194292  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 21:06:33.198466  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 21:06:33.207132  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 21:06:33.210762  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 21:06:33.219366  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 21:06:33.222902  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 21:06:33.231254  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 21:06:33.235252  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 21:06:33.245320  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 21:06:33.249647  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 21:06:33.259234  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:06:33.282501  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:06:33.308249  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:06:33.330512  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:06:33.350745  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:06:33.371841  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:06:33.392489  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:06:33.415260  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:06:33.435093  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:06:33.455125  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:06:33.475775  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:06:33.503119  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 21:06:33.519902  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 21:06:33.541097  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 21:06:33.558546  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 21:06:33.580936  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 21:06:33.604112  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 21:06:33.628438  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 21:06:33.645138  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:06:33.653214  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.661760  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:06:33.672498  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.677561  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.677637  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.725658  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:06:33.734300  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.742147  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:06:33.750364  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.754312  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.754435  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.795883  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:06:33.803561  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.811944  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:06:33.819768  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.823821  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.823917  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.869341  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:06:33.877525  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:06:33.881524  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:06:33.923421  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:06:33.965151  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:06:34.007958  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:06:34.056315  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:06:34.099324  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
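The openssl -checkend 86400 invocations above fail when a certificate stops being valid within the next 24 hours, which is what would trigger regeneration on restart. A small Go equivalent using crypto/x509 is sketched below, taking the PEM file path as its argument; it is an illustration, not the code the test runs.

    // certcheck.go: report whether a PEM-encoded certificate expires within 24h,
    // mirroring `openssl x509 -checkend 86400`. The file path comes from os.Args[1].
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile(os.Args[1])
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // -checkend 86400: fail if the cert is no longer valid 86400 seconds from now.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Printf("%s expires within 24h (NotAfter=%s)\n", os.Args[1], cert.NotAfter)
            os.Exit(1)
        }
        fmt.Printf("%s valid beyond 24h (NotAfter=%s)\n", os.Args[1], cert.NotAfter)
    }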
	I1212 21:06:34.142509  449185 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.2 crio true true} ...
	I1212 21:06:34.142710  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:06:34.142750  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:06:34.142821  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:06:34.155586  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
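The `lsmod | grep ip_vs` probe above exits with status 1, so minikube falls back to running kube-vip without IPVS-based control-plane load-balancing. A rough Go equivalent of that probe reads /proc/modules, the file lsmod itself parses; the helper name is illustrative:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// moduleLoaded reports whether a kernel module with the given name prefix
// (e.g. ip_vs, ip_vs_rr) appears in /proc/modules.
func moduleLoaded(prefix string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) > 0 && strings.HasPrefix(fields[0], prefix) {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := moduleLoaded("ip_vs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("ip_vs loaded:", ok)
}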
	I1212 21:06:34.155655  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1212 21:06:34.155735  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:06:34.164504  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:06:34.164593  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 21:06:34.172960  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:06:34.187238  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:06:34.202155  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:06:34.217531  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:06:34.221916  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
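The bash one-liner above keeps the control-plane VIP entry idempotent: it strips any existing line for control-plane.minikube.internal from /etc/hosts and appends the current mapping. The same logic as a small Go sketch (illustrative, not the code behind this log; needs the same root privileges the sudo cp has):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any line already mapping host and appends
// "ip<TAB>host", matching the grep -v / echo pipeline in the log.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}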
	I1212 21:06:34.232222  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:34.409764  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:06:34.425465  449185 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:06:34.426019  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:34.429018  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:06:34.431984  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:34.608481  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:06:34.623603  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:06:34.623719  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:06:34.623971  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m03" to be "Ready" ...
	I1212 21:06:34.627483  449185 node_ready.go:49] node "ha-008703-m03" is "Ready"
	I1212 21:06:34.627510  449185 node_ready.go:38] duration metric: took 3.502711ms for node "ha-008703-m03" to be "Ready" ...
	I1212 21:06:34.627524  449185 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:06:34.627583  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:35.127774  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:35.627665  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:36.128468  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:36.628211  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:37.128314  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:37.627991  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:38.127766  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:38.627868  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:39.128698  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:39.628035  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:40.128648  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:40.627740  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:41.128354  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:41.628245  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:42.130632  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:42.627827  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:43.128583  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:43.627968  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:44.128136  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:44.628605  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:45.128568  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:45.627727  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:46.128033  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:46.627763  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:47.128250  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:47.628035  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:48.127920  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:48.628389  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:49.127872  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:49.628485  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:50.127813  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:50.627737  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:51.128714  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:51.628186  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:52.128495  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:52.627734  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.128077  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.628172  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.643287  449185 api_server.go:72] duration metric: took 19.217761741s to wait for apiserver process to appear ...
	I1212 21:06:53.643310  449185 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:06:53.643330  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:53.653231  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 21:06:53.654408  449185 api_server.go:141] control plane version: v1.34.2
	I1212 21:06:53.654429  449185 api_server.go:131] duration metric: took 11.111371ms to wait for apiserver health ...
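Once the kube-apiserver process shows up, the health wait above is a polled HTTPS GET against /healthz until it answers 200 "ok". A stripped-down version of that poll; for brevity this sketch skips TLS verification, whereas the real client trusts the cluster CA from ca.crt:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for brevity only; verify against the cluster CA in real use.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, strings.TrimSpace(string(body)))
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}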
	I1212 21:06:53.654438  449185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:06:53.664181  449185 system_pods.go:59] 26 kube-system pods found
	I1212 21:06:53.664268  449185 system_pods.go:61] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.664292  449185 system_pods.go:61] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.664326  449185 system_pods.go:61] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:53.664350  449185 system_pods.go:61] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running
	I1212 21:06:53.664399  449185 system_pods.go:61] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:53.664423  449185 system_pods.go:61] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:53.664447  449185 system_pods.go:61] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:53.664476  449185 system_pods.go:61] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:53.664511  449185 system_pods.go:61] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:53.664543  449185 system_pods.go:61] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running
	I1212 21:06:53.664562  449185 system_pods.go:61] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running
	I1212 21:06:53.664586  449185 system_pods.go:61] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:53.664617  449185 system_pods.go:61] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running
	I1212 21:06:53.664639  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running
	I1212 21:06:53.664655  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:53.664672  449185 system_pods.go:61] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:53.664692  449185 system_pods.go:61] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:53.664722  449185 system_pods.go:61] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:53.664747  449185 system_pods.go:61] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:53.664767  449185 system_pods.go:61] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:53.664786  449185 system_pods.go:61] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running
	I1212 21:06:53.664806  449185 system_pods.go:61] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:53.664833  449185 system_pods.go:61] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:53.664856  449185 system_pods.go:61] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:53.664876  449185 system_pods.go:61] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:53.664898  449185 system_pods.go:61] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:06:53.664934  449185 system_pods.go:74] duration metric: took 10.478512ms to wait for pod list to return data ...
	I1212 21:06:53.664963  449185 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:06:53.672021  449185 default_sa.go:45] found service account: "default"
	I1212 21:06:53.672087  449185 default_sa.go:55] duration metric: took 7.103458ms for default service account to be created ...
	I1212 21:06:53.672114  449185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:06:53.683734  449185 system_pods.go:86] 26 kube-system pods found
	I1212 21:06:53.683818  449185 system_pods.go:89] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.683843  449185 system_pods.go:89] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.683876  449185 system_pods.go:89] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:53.683898  449185 system_pods.go:89] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running
	I1212 21:06:53.683916  449185 system_pods.go:89] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:53.683935  449185 system_pods.go:89] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:53.683958  449185 system_pods.go:89] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:53.683985  449185 system_pods.go:89] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:53.684009  449185 system_pods.go:89] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:53.684028  449185 system_pods.go:89] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running
	I1212 21:06:53.684048  449185 system_pods.go:89] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running
	I1212 21:06:53.684069  449185 system_pods.go:89] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:53.684096  449185 system_pods.go:89] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running
	I1212 21:06:53.684121  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running
	I1212 21:06:53.684144  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:53.684165  449185 system_pods.go:89] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:53.684195  449185 system_pods.go:89] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:53.684216  449185 system_pods.go:89] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:53.684234  449185 system_pods.go:89] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:53.684254  449185 system_pods.go:89] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:53.684274  449185 system_pods.go:89] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running
	I1212 21:06:53.684305  449185 system_pods.go:89] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:53.684334  449185 system_pods.go:89] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:53.684356  449185 system_pods.go:89] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:53.684505  449185 system_pods.go:89] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:53.684532  449185 system_pods.go:89] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:06:53.684555  449185 system_pods.go:126] duration metric: took 12.421784ms to wait for k8s-apps to be running ...
	I1212 21:06:53.684581  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:06:53.684664  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:06:53.707726  449185 system_svc.go:56] duration metric: took 23.13631ms WaitForService to wait for kubelet
	I1212 21:06:53.707794  449185 kubeadm.go:587] duration metric: took 19.282272877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:53.707828  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:06:53.713066  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713138  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713167  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713189  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713224  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713251  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713272  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713294  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713315  449185 node_conditions.go:105] duration metric: took 5.4683ms to run NodePressure ...
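The NodePressure verification above reads each node's capacity (here 203034800Ki of ephemeral storage and 2 CPUs per node). An equivalent query with client-go might look like the sketch below; the kubeconfig path is a placeholder and this is not minikube's own implementation:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder path; point this at the kubeconfig for the ha-008703 profile.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}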
	I1212 21:06:53.713355  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:06:53.713389  449185 start.go:256] writing updated cluster config ...
	I1212 21:06:53.716967  449185 out.go:203] 
	I1212 21:06:53.720156  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:53.720328  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:53.723670  449185 out.go:179] * Starting "ha-008703-m04" worker node in "ha-008703" cluster
	I1212 21:06:53.726637  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:06:53.729576  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:06:53.732517  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:06:53.732614  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:06:53.732589  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:06:53.732947  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:06:53.732979  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:06:53.733130  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:53.769116  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:06:53.769147  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:06:53.769168  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:06:53.769196  449185 start.go:360] acquireMachinesLock for ha-008703-m04: {Name:mk62cc2a2cc2e6d3b3f47556aaddea9ef719055b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:53.769254  449185 start.go:364] duration metric: took 38.549µs to acquireMachinesLock for "ha-008703-m04"
	I1212 21:06:53.769277  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:06:53.769289  449185 fix.go:54] fixHost starting: m04
	I1212 21:06:53.769545  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 21:06:53.786769  449185 fix.go:112] recreateIfNeeded on ha-008703-m04: state=Stopped err=<nil>
	W1212 21:06:53.786801  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:06:53.789926  449185 out.go:252] * Restarting existing docker container for "ha-008703-m04" ...
	I1212 21:06:53.790089  449185 cli_runner.go:164] Run: docker start ha-008703-m04
	I1212 21:06:54.156965  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 21:06:54.178693  449185 kic.go:430] container "ha-008703-m04" state is running.
	I1212 21:06:54.179092  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:54.203905  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:54.204146  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:06:54.204209  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:54.236695  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:54.237065  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:54.237081  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:06:54.237686  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:06:57.432360  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m04
	
	I1212 21:06:57.432405  449185 ubuntu.go:182] provisioning hostname "ha-008703-m04"
	I1212 21:06:57.432471  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:57.466545  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:57.466905  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:57.466917  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m04 && echo "ha-008703-m04" | sudo tee /etc/hostname
	I1212 21:06:57.695949  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m04
	
	I1212 21:06:57.696057  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:57.725675  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:57.725993  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:57.726015  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:06:57.922048  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:06:57.922076  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:06:57.922097  449185 ubuntu.go:190] setting up certificates
	I1212 21:06:57.922108  449185 provision.go:84] configureAuth start
	I1212 21:06:57.922191  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:57.949300  449185 provision.go:143] copyHostCerts
	I1212 21:06:57.949346  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:57.949379  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:06:57.949390  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:57.949467  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:06:57.949557  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:57.949579  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:06:57.949590  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:57.949619  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:06:57.949669  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:57.949692  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:06:57.949702  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:57.949735  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:06:57.949797  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m04 san=[127.0.0.1 192.168.49.5 ha-008703-m04 localhost minikube]
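configureAuth generates a fresh Docker server certificate for the node, signed by the profile's CA and carrying the SAN list shown above (loopback, the node IP, the hostname, localhost, minikube). A minimal sketch of issuing such a CA-signed server certificate with Go's crypto/x509, assuming a PKCS#1 RSA CA key; file names and the 3-year lifetime are illustrative, not minikube's implementation:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func loadPEM(path string) *pem.Block {
	data, err := os.ReadFile(path)
	must(err)
	block, _ := pem.Decode(data)
	if block == nil {
		panic(path + ": no PEM block")
	}
	return block
}

func main() {
	// Illustrative inputs; these correspond to ca.pem / ca-key.pem in the log.
	caCert, err := x509.ParseCertificate(loadPEM("ca.pem").Bytes)
	must(err)
	caKey, err := x509.ParsePKCS1PrivateKey(loadPEM("ca-key.pem").Bytes) // assumes PKCS#1 RSA
	must(err)

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-008703-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-008703-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}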
	I1212 21:06:58.253055  449185 provision.go:177] copyRemoteCerts
	I1212 21:06:58.253130  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:06:58.253185  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:58.272770  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:58.384265  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:06:58.384326  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:06:58.432775  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:06:58.432846  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:06:58.468705  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:06:58.468769  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:06:58.498893  449185 provision.go:87] duration metric: took 576.767506ms to configureAuth
	I1212 21:06:58.498961  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:06:58.499231  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:58.499373  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:58.531077  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:58.531395  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:58.531411  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:06:59.036280  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:06:59.036310  449185 machine.go:97] duration metric: took 4.83214688s to provisionDockerMachine
	I1212 21:06:59.036331  449185 start.go:293] postStartSetup for "ha-008703-m04" (driver="docker")
	I1212 21:06:59.036343  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:06:59.036466  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:06:59.036523  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.086256  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.217706  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:06:59.225272  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:06:59.225304  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:06:59.225326  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:06:59.225398  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:06:59.225489  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:06:59.225502  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:06:59.225626  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:06:59.239694  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:59.289259  449185 start.go:296] duration metric: took 252.894748ms for postStartSetup
	I1212 21:06:59.289353  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:06:59.289435  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.318501  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.433235  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
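After provisioning, minikube samples disk usage on /var with df to spot low-space nodes. On Linux the same figure can be read directly from statfs(2); a small sketch that approximates df's used-percentage:

package main

import (
	"fmt"
	"os"
	"syscall"
)

// usedPercent returns how full the filesystem containing path is, roughly
// what `df /var | awk 'NR==2{print $5}'` reports. Linux-only.
func usedPercent(path string) (float64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	used := st.Blocks - st.Bfree
	// df computes used / (used + blocks available to unprivileged users).
	return 100 * float64(used) / float64(used+st.Bavail), nil
}

func main() {
	pct, err := usedPercent("/var")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("/var is %.0f%% used\n", pct)
}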
	I1212 21:06:59.440975  449185 fix.go:56] duration metric: took 5.671680345s for fixHost
	I1212 21:06:59.441000  449185 start.go:83] releasing machines lock for "ha-008703-m04", held for 5.671734343s
	I1212 21:06:59.441074  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:59.473221  449185 out.go:179] * Found network options:
	I1212 21:06:59.477821  449185 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1212 21:06:59.480861  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480899  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480912  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480936  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480956  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480968  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:06:59.481044  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:06:59.481089  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.481371  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:06:59.481425  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.521656  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.528821  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.865561  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:06:59.874595  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:06:59.874667  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:06:59.887303  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:06:59.887378  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:06:59.887427  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:06:59.887500  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:06:59.908986  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:06:59.940196  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:06:59.940301  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:06:59.959663  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:06:59.976282  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:07:00.307427  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:07:00.569417  449185 docker.go:234] disabling docker service ...
	I1212 21:07:00.569500  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:07:00.607031  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:07:00.633272  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:07:00.844907  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:07:01.084528  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:07:01.108001  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:07:01.130446  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:07:01.130569  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.145280  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:07:01.145425  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.165912  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.178770  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.192394  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:07:01.203182  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.214233  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.224343  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.236075  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:07:01.246300  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:07:01.256331  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:01.516203  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
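The block above patches /etc/crio/crio.conf.d/02-crio.conf in place with sed (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod", unprivileged low ports via default_sysctls) and then restarts CRI-O. A generic Go helper for that kind of key rewrite could look like the sketch below; it is illustrative only, since the real flow shells out to sed as logged, and unlike the sed expressions it only matches uncommented keys:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setTOMLKey replaces an existing `key = ...` line in a CRI-O drop-in, or
// appends one if the key is absent.
func setTOMLKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	line := fmt.Sprintf("%s = %s", key, value)
	re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
	var out []byte
	if re.Match(data) {
		out = re.ReplaceAll(data, []byte(line))
	} else {
		out = append(data, []byte("\n"+line+"\n")...)
	}
	return os.WriteFile(path, out, 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	for k, v := range map[string]string{
		"pause_image":    `"registry.k8s.io/pause:3.10.1"`,
		"cgroup_manager": `"cgroupfs"`,
	} {
		if err := setTOMLKey(conf, k, v); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}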
	I1212 21:07:01.766997  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:07:01.767119  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:07:01.776270  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:07:01.776437  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:07:01.784745  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:07:01.824822  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:07:01.824977  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:07:01.889046  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:07:01.956065  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:07:01.959062  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:07:01.962079  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1212 21:07:01.964978  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1212 21:07:01.967779  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:07:01.996732  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:07:02.001678  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:07:02.020405  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:07:02.020654  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:07:02.020930  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:07:02.039611  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:07:02.039893  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.5
	I1212 21:07:02.039901  449185 certs.go:195] generating shared ca certs ...
	I1212 21:07:02.039915  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:07:02.040028  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:07:02.040067  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:07:02.040078  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:07:02.040092  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:07:02.040104  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:07:02.040116  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:07:02.040169  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:07:02.040202  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:07:02.040210  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:07:02.040237  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:07:02.040261  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:07:02.040288  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:07:02.040334  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:07:02.040380  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.040396  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.040407  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.040424  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:07:02.066397  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:07:02.105376  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:07:02.137944  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:07:02.170023  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:07:02.210932  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:07:02.238540  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:07:02.269874  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:07:02.281063  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.291218  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:07:02.301041  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.308712  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.308786  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.368311  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:07:02.378631  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.387217  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:07:02.398975  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.403766  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.403869  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.470421  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:07:02.480522  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.493373  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:07:02.510638  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.516014  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.516150  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.591218  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:07:02.600904  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:07:02.619811  449185 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 21:07:02.619887  449185 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2 crio false true} ...
	I1212 21:07:02.619990  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:07:02.620088  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:07:02.636422  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:07:02.636540  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 21:07:02.650400  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:07:02.684861  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:07:02.708803  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:07:02.713707  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:07:02.731184  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:03.010394  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:07:03.061651  449185 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 21:07:03.062018  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:07:03.067183  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:07:03.070801  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:03.406466  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:07:03.471431  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:07:03.471508  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:07:03.471736  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m04" to be "Ready" ...
	I1212 21:07:03.505163  449185 node_ready.go:49] node "ha-008703-m04" is "Ready"
	I1212 21:07:03.505194  449185 node_ready.go:38] duration metric: took 33.438197ms for node "ha-008703-m04" to be "Ready" ...
	I1212 21:07:03.505209  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:07:03.505266  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:07:03.526122  449185 system_svc.go:56] duration metric: took 20.904535ms WaitForService to wait for kubelet
	I1212 21:07:03.526155  449185 kubeadm.go:587] duration metric: took 464.111537ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:07:03.526175  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:07:03.582671  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582703  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582714  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582719  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582723  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582727  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582731  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582735  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582741  449185 node_conditions.go:105] duration metric: took 56.560779ms to run NodePressure ...
	I1212 21:07:03.582752  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:07:03.582774  449185 start.go:256] writing updated cluster config ...
	I1212 21:07:03.583086  449185 ssh_runner.go:195] Run: rm -f paused
	I1212 21:07:03.601326  449185 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:07:03.602059  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:07:03.627964  449185 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8tvqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.640449  449185 pod_ready.go:94] pod "coredns-66bc5c9577-8tvqx" is "Ready"
	I1212 21:07:03.640525  449185 pod_ready.go:86] duration metric: took 12.481008ms for pod "coredns-66bc5c9577-8tvqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.640551  449185 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kls2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.647941  449185 pod_ready.go:94] pod "coredns-66bc5c9577-kls2t" is "Ready"
	I1212 21:07:03.648021  449185 pod_ready.go:86] duration metric: took 7.447403ms for pod "coredns-66bc5c9577-kls2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.734522  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.742549  449185 pod_ready.go:94] pod "etcd-ha-008703" is "Ready"
	I1212 21:07:03.742645  449185 pod_ready.go:86] duration metric: took 8.036611ms for pod "etcd-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.742670  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.751107  449185 pod_ready.go:94] pod "etcd-ha-008703-m02" is "Ready"
	I1212 21:07:03.751180  449185 pod_ready.go:86] duration metric: took 8.490203ms for pod "etcd-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.751203  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.802884  449185 request.go:683] "Waited before sending request" delay="51.579039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-008703-m03"
	I1212 21:07:04.003143  449185 request.go:683] "Waited before sending request" delay="191.298042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:04.008011  449185 pod_ready.go:94] pod "etcd-ha-008703-m03" is "Ready"
	I1212 21:07:04.008105  449185 pod_ready.go:86] duration metric: took 256.8794ms for pod "etcd-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.203542  449185 request.go:683] "Waited before sending request" delay="195.301148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1212 21:07:04.208571  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.402858  449185 request.go:683] "Waited before sending request" delay="194.13984ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703"
	I1212 21:07:04.603054  449185 request.go:683] "Waited before sending request" delay="196.30777ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:04.607366  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703" is "Ready"
	I1212 21:07:04.607392  449185 pod_ready.go:86] duration metric: took 398.743662ms for pod "kube-apiserver-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.607403  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.802681  449185 request.go:683] "Waited before sending request" delay="195.203703ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703-m02"
	I1212 21:07:05.004599  449185 request.go:683] "Waited before sending request" delay="198.050663ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:05.009883  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703-m02" is "Ready"
	I1212 21:07:05.009916  449185 pod_ready.go:86] duration metric: took 402.505715ms for pod "kube-apiserver-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.009927  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.203348  449185 request.go:683] "Waited before sending request" delay="193.318894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703-m03"
	I1212 21:07:05.402598  449185 request.go:683] "Waited before sending request" delay="195.266325ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:05.407026  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703-m03" is "Ready"
	I1212 21:07:05.407054  449185 pod_ready.go:86] duration metric: took 397.119016ms for pod "kube-apiserver-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.603514  449185 request.go:683] "Waited before sending request" delay="196.332041ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1212 21:07:05.609335  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.802598  449185 request.go:683] "Waited before sending request" delay="193.136821ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703"
	I1212 21:07:06.002969  449185 request.go:683] "Waited before sending request" delay="196.400711ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:06.009868  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703" is "Ready"
	I1212 21:07:06.009898  449185 pod_ready.go:86] duration metric: took 400.534916ms for pod "kube-controller-manager-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.009910  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.203284  449185 request.go:683] "Waited before sending request" delay="193.288724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703-m02"
	I1212 21:07:06.403087  449185 request.go:683] "Waited before sending request" delay="195.335069ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:06.406992  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703-m02" is "Ready"
	I1212 21:07:06.407024  449185 pod_ready.go:86] duration metric: took 397.103754ms for pod "kube-controller-manager-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.407035  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.603444  449185 request.go:683] "Waited before sending request" delay="196.318585ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703-m03"
	I1212 21:07:06.803243  449185 request.go:683] "Waited before sending request" delay="196.311315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:06.811152  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703-m03" is "Ready"
	I1212 21:07:06.811182  449185 pod_ready.go:86] duration metric: took 404.13997ms for pod "kube-controller-manager-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.003659  449185 request.go:683] "Waited before sending request" delay="192.369133ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1212 21:07:07.008682  449185 pod_ready.go:83] waiting for pod "kube-proxy-26llr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.203112  449185 request.go:683] "Waited before sending request" delay="194.317566ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26llr"
	I1212 21:07:07.403112  449185 request.go:683] "Waited before sending request" delay="196.188213ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m04"
	I1212 21:07:07.406710  449185 pod_ready.go:94] pod "kube-proxy-26llr" is "Ready"
	I1212 21:07:07.406741  449185 pod_ready.go:86] duration metric: took 398.024461ms for pod "kube-proxy-26llr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.406752  449185 pod_ready.go:83] waiting for pod "kube-proxy-5cjcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.603217  449185 request.go:683] "Waited before sending request" delay="196.391784ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5cjcj"
	I1212 21:07:07.802591  449185 request.go:683] "Waited before sending request" delay="195.268704ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:07.806437  449185 pod_ready.go:94] pod "kube-proxy-5cjcj" is "Ready"
	I1212 21:07:07.806468  449185 pod_ready.go:86] duration metric: took 399.70889ms for pod "kube-proxy-5cjcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.806478  449185 pod_ready.go:83] waiting for pod "kube-proxy-tgx5j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.003374  449185 request.go:683] "Waited before sending request" delay="196.807041ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgx5j"
	I1212 21:07:08.203254  449185 request.go:683] "Waited before sending request" delay="193.281921ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:08.206488  449185 pod_ready.go:94] pod "kube-proxy-tgx5j" is "Ready"
	I1212 21:07:08.206516  449185 pod_ready.go:86] duration metric: took 400.031584ms for pod "kube-proxy-tgx5j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.206527  449185 pod_ready.go:83] waiting for pod "kube-proxy-v8lm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.402890  449185 request.go:683] "Waited before sending request" delay="196.283952ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v8lm4"
	I1212 21:07:08.602890  449185 request.go:683] "Waited before sending request" delay="190.306444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:08.606678  449185 pod_ready.go:94] pod "kube-proxy-v8lm4" is "Ready"
	I1212 21:07:08.606704  449185 pod_ready.go:86] duration metric: took 400.170499ms for pod "kube-proxy-v8lm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.803166  449185 request.go:683] "Waited before sending request" delay="196.329375ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1212 21:07:08.807939  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.006982  449185 request.go:683] "Waited before sending request" delay="198.916082ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703"
	I1212 21:07:09.203284  449185 request.go:683] "Waited before sending request" delay="192.346692ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:09.206489  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703" is "Ready"
	I1212 21:07:09.206522  449185 pod_ready.go:86] duration metric: took 398.549635ms for pod "kube-scheduler-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.206532  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.402973  449185 request.go:683] "Waited before sending request" delay="196.306934ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703-m02"
	I1212 21:07:09.603345  449185 request.go:683] "Waited before sending request" delay="192.346225ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:09.611536  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703-m02" is "Ready"
	I1212 21:07:09.611565  449185 pod_ready.go:86] duration metric: took 405.026929ms for pod "kube-scheduler-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.611575  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.802963  449185 request.go:683] "Waited before sending request" delay="191.311533ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703-m03"
	I1212 21:07:10.004827  449185 request.go:683] "Waited before sending request" delay="198.485333ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:10.012647  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703-m03" is "Ready"
	I1212 21:07:10.012677  449185 pod_ready.go:86] duration metric: took 401.094897ms for pod "kube-scheduler-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:10.012691  449185 pod_ready.go:40] duration metric: took 6.411220695s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:07:10.085120  449185 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 21:07:10.090453  449185 out.go:179] * Done! kubectl is now configured to use "ha-008703" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.084643835Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4f025e76-4eca-4fb1-b55a-f8d9a43fa536 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.087572223Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8ebdfa7e-5f7d-4824-b4b7-0fe2edd10aff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.087672564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.095689671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.0959013Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eb92904b79612128723b08cf808f293d7aa852c53deebc7388a003f7a25a6f9f/merged/etc/passwd: no such file or directory"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.095933095Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eb92904b79612128723b08cf808f293d7aa852c53deebc7388a003f7a25a6f9f/merged/etc/group: no such file or directory"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.096211382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.136290189Z" level=info msg="Created container 5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145: kube-system/storage-provisioner/storage-provisioner" id=8ebdfa7e-5f7d-4824-b4b7-0fe2edd10aff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.137414204Z" level=info msg="Starting container: 5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145" id=c9a226e6-422b-41f8-9e9f-add9192400a7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.14248122Z" level=info msg="Started container" PID=1398 containerID=5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145 description=kube-system/storage-provisioner/storage-provisioner id=c9a226e6-422b-41f8-9e9f-add9192400a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.077353049Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.084667544Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.090321422Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.090434276Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.101511448Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.108846054Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.108901554Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.125800597Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.125957924Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.126043537Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133398738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133546145Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133624332Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.148814452Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.148949928Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5129752cc0a67       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   About a minute ago   Running             storage-provisioner       2                   1b6b1faf503c8       storage-provisioner                 kube-system
	3f4c5923951e8       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago        Running             busybox                   1                   9a656c52a260b       busybox-7b57f96db7-tczdt            default
	560dd3383ed66       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago        Running             coredns                   1                   2f24e16e55927       coredns-66bc5c9577-8tvqx            kube-system
	7cef3eaf30308       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago        Running             kindnet-cni               1                   021217a0cf931       kindnet-f7h24                       kube-system
	82dd101ece4d1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago        Exited              storage-provisioner       1                   1b6b1faf503c8       storage-provisioner                 kube-system
	ad94d81034c43       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago        Running             coredns                   1                   b75479f05351c       coredns-66bc5c9577-kls2t            kube-system
	2b11faa987b07       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   2 minutes ago        Running             kube-proxy                1                   66c81b9e2ff38       kube-proxy-tgx5j                    kube-system
	f08cf114510a2       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   2 minutes ago        Running             kube-controller-manager   8                   19bf9c82b9d81       kube-controller-manager-ha-008703   kube-system
	93fc3054083af       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   2 minutes ago        Running             kube-apiserver            8                   8176618f6ba71       kube-apiserver-ha-008703            kube-system
	05ba874359221       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   3 minutes ago        Running             kube-scheduler            2                   60ffed268d568       kube-scheduler-ha-008703            kube-system
	6e71e63256727       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   3 minutes ago        Exited              kube-apiserver            7                   8176618f6ba71       kube-apiserver-ha-008703            kube-system
	62a05b797d322       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   3 minutes ago        Running             kube-vip                  1                   8e01afee41b4c       kube-vip-ha-008703                  kube-system
	03159ef735d03       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   3 minutes ago        Exited              kube-controller-manager   7                   19bf9c82b9d81       kube-controller-manager-ha-008703   kube-system
	e2542b7b3b0ad       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   3 minutes ago        Running             etcd                      3                   e36007e1324cc       etcd-ha-008703                      kube-system
	
	
	==> coredns [560dd3383ed66f823e585260ec4823152488386a1e71bacea6cd9ca156adb2d8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52286 - 29430 "HINFO IN 4498128949033305171.1950480245235256825. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020264931s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ad94d81034c434b44c842f2117ddb8a51227d702a250a41dac1fac6dcf4f0e1c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36509 - 26980 "HINFO IN 2040533104487656964.3099826236879850204. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003954694s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-008703
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_52_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:52:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:08:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:07:32 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:07:32 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:07:32 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:07:32 +0000   Fri, 12 Dec 2025 20:52:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-008703
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                6ff1a8bd-14d1-41ae-8cb8-9156f60dd654
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tczdt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-8tvqx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 coredns-66bc5c9577-kls2t             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-ha-008703                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-f7h24                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-008703             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-008703    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tgx5j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-008703             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-008703                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m29s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 16m                  kube-proxy       
	  Normal   Starting                 2m25s                kube-proxy       
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)    kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)    kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)    kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    16m                  kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                  kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  16m                  kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           16m                  node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   NodeReady                15m                  kubelet          Node ha-008703 status is now: NodeReady
	  Normal   RegisteredNode           14m                  node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   Starting                 3m3s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m3s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m3s (x8 over 3m3s)  kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m3s (x8 over 3m3s)  kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m3s (x8 over 3m3s)  kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m24s                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           2m23s                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           107s                 node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           51s                  node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	
	
	Name:               ha-008703-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_52_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:52:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:08:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:53:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-008703-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                ca808c21-ecc5-4ee7-9940-dffdef1da5b2
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hltw8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-008703-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         15m
	  kube-system                 kindnet-blbfb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-008703-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-008703-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-5cjcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-008703-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-008703-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m6s                   kube-proxy       
	  Normal   Starting                 11m                    kube-proxy       
	  Normal   Starting                 15m                    kube-proxy       
	  Normal   RegisteredNode           15m                    node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           15m                    node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-008703-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-008703-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-008703-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                    node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   Starting                 2m59s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m59s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  2m59s (x8 over 2m59s)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m59s (x8 over 2m59s)  kubelet          Node ha-008703-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m59s (x8 over 2m59s)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m24s                  node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           2m23s                  node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           107s                   node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           51s                    node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	
	
	Name:               ha-008703-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_54_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:54:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:08:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:08:31 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:08:31 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:08:31 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:08:31 +0000   Fri, 12 Dec 2025 20:54:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-008703-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                fa4c05be-b5d2-4bf0-a4b6-630b820e0e0a
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kc6ms                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-008703-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-6dvv4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-008703-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-008703-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-v8lm4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-008703-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-008703-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 95s                    kube-proxy       
	  Normal   CIDRAssignmentFailed     14m                    cidrAllocator    Node ha-008703-m03 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           14m                    node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           2m24s                  node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           2m23s                  node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node ha-008703-m03 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m22s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m22s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node ha-008703-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m22s (x8 over 2m22s)  kubelet          Node ha-008703-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           107s                   node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           51s                    node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	
	
	Name:               ha-008703-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_55_24_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:55:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:08:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:07:49 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:07:49 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:07:49 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:07:49 +0000   Fri, 12 Dec 2025 20:56:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-008703-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                8a9366c1-4fff-44a3-a6b8-824607a69efc
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fwsws       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-proxy-26llr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 91s                  kube-proxy       
	  Normal   Starting                 13m                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)    kubelet          Node ha-008703-m04 status is now: NodeHasSufficientMemory
	  Normal   CIDRAssignmentFailed     13m                  cidrAllocator    Node ha-008703-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)    kubelet          Node ha-008703-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)    kubelet          Node ha-008703-m04 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           13m                  node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   NodeReady                12m                  kubelet          Node ha-008703-m04 status is now: NodeReady
	  Normal   RegisteredNode           11m                  node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           2m24s                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           2m23s                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   Starting                 114s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 114s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  111s (x8 over 114s)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    111s (x8 over 114s)  kubelet          Node ha-008703-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     111s (x8 over 114s)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           107s                 node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           51s                  node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	
	
	Name:               ha-008703-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T21_08_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 21:08:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m05
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:08:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:08:46 +0000   Fri, 12 Dec 2025 21:08:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:08:46 +0000   Fri, 12 Dec 2025 21:08:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:08:46 +0000   Fri, 12 Dec 2025 21:08:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:08:46 +0000   Fri, 12 Dec 2025 21:08:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-008703-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                217ce67c-c46d-4546-ab8f-db6ccfc738bf
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-008703-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         45s
	  kube-system                 kindnet-2dqw9                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      49s
	  kube-system                 kube-apiserver-ha-008703-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-controller-manager-ha-008703-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-l5ppw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 kube-scheduler-ha-008703-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-vip-ha-008703-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        43s   kube-proxy       
	  Normal  RegisteredNode  49s   node-controller  Node ha-008703-m05 event: Registered Node ha-008703-m05 in Controller
	  Normal  RegisteredNode  48s   node-controller  Node ha-008703-m05 event: Registered Node ha-008703-m05 in Controller
	  Normal  RegisteredNode  47s   node-controller  Node ha-008703-m05 event: Registered Node ha-008703-m05 in Controller
	  Normal  RegisteredNode  46s   node-controller  Node ha-008703-m05 event: Registered Node ha-008703-m05 in Controller
	
	
	==> dmesg <==
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	[Dec12 20:52] overlayfs: idmapped layers are currently not supported
	[ +33.094252] overlayfs: idmapped layers are currently not supported
	[Dec12 20:53] overlayfs: idmapped layers are currently not supported
	[Dec12 20:55] overlayfs: idmapped layers are currently not supported
	[Dec12 20:56] overlayfs: idmapped layers are currently not supported
	[Dec12 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.790478] overlayfs: idmapped layers are currently not supported
	[Dec12 21:05] overlayfs: idmapped layers are currently not supported
	[  +3.613273] overlayfs: idmapped layers are currently not supported
	[Dec12 21:06] overlayfs: idmapped layers are currently not supported
	[Dec12 21:07] overlayfs: idmapped layers are currently not supported
	[ +26.617506] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e2542b7b3b0add4c1c8e1167b6f86cc40b8c70e55d0db7ae97014db17bfee8b2] <==
	{"level":"warn","ts":"2025-12-12T21:07:47.405190Z","caller":"etcdhttp/peer.go:152","msg":"failed to promote a member","member-id":"3f1ca3d03b4df108","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2025-12-12T21:07:47.411112Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"3f1ca3d03b4df108","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-12T21:07:47.411150Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:47.720775Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":2853,"remote-peer-id":"3f1ca3d03b4df108","bytes":4982666,"size":"5.0 MB"}
	{"level":"warn","ts":"2025-12-12T21:07:47.792749Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:07:47.837402Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:07:47.868799Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3f1ca3d03b4df108","error":"failed to write 3f1ca3d03b4df108 on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:34384: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-12T21:07:47.869102Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:47.888004Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4547689838480847112 7042564765798820169 12593026477526642892 15833178754663563274)"}
	{"level":"info","ts":"2025-12-12T21:07:47.888231Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:47.888302Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:47.927151Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:48.196993Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:48.219871Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"warn","ts":"2025-12-12T21:07:48.246127Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3f1ca3d03b4df108","error":"failed to write 3f1ca3d03b4df108 on stream MsgApp v2 (write tcp 192.168.49.2:2380->192.168.49.6:34368: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-12T21:07:48.246456Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:48.247353Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"3f1ca3d03b4df108","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-12T21:07:48.247474Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:48.247512Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:48.262693Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"3f1ca3d03b4df108","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-12T21:07:48.262741Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:56.975244Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-12T21:08:01.070631Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-12T21:08:05.250404Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-12T21:08:17.721286Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"3f1ca3d03b4df108","bytes":4982666,"size":"5.0 MB","took":"31.235800309s"}
	
	
	==> kernel <==
	 21:08:49 up  3:51,  0 user,  load average: 2.02, 1.84, 1.27
	Linux ha-008703 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7cef3eaf30308ab6e267a8568bc724dbe47546cc79d171e489dd52fca0f76a09] <==
	I1212 21:08:22.074701       1 main.go:324] Node ha-008703-m02 has CIDR [10.244.1.0/24] 
	I1212 21:08:22.074795       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1212 21:08:22.074806       1 main.go:324] Node ha-008703-m03 has CIDR [10.244.2.0/24] 
	I1212 21:08:22.074924       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1212 21:08:22.074938       1 main.go:324] Node ha-008703-m04 has CIDR [10.244.3.0/24] 
	I1212 21:08:32.074889       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 21:08:32.074927       1 main.go:301] handling current node
	I1212 21:08:32.074942       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1212 21:08:32.074949       1 main.go:324] Node ha-008703-m02 has CIDR [10.244.1.0/24] 
	I1212 21:08:32.075315       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1212 21:08:32.075333       1 main.go:324] Node ha-008703-m03 has CIDR [10.244.2.0/24] 
	I1212 21:08:32.075628       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1212 21:08:32.075656       1 main.go:324] Node ha-008703-m04 has CIDR [10.244.3.0/24] 
	I1212 21:08:32.075875       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1212 21:08:32.075892       1 main.go:324] Node ha-008703-m05 has CIDR [10.244.4.0/24] 
	I1212 21:08:42.083775       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1212 21:08:42.083814       1 main.go:324] Node ha-008703-m04 has CIDR [10.244.3.0/24] 
	I1212 21:08:42.084010       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1212 21:08:42.084025       1 main.go:324] Node ha-008703-m05 has CIDR [10.244.4.0/24] 
	I1212 21:08:42.084102       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 21:08:42.084117       1 main.go:301] handling current node
	I1212 21:08:42.084130       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1212 21:08:42.084136       1 main.go:324] Node ha-008703-m02 has CIDR [10.244.1.0/24] 
	I1212 21:08:42.084199       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1212 21:08:42.084206       1 main.go:324] Node ha-008703-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [6e71e63256727335b637c10c11453815d5622c8d5eb3fb9654535f5b4b692c2f] <==
	I1212 21:05:47.565735       1 server.go:150] Version: v1.34.2
	I1212 21:05:47.569343       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1212 21:05:49.281036       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1212 21:05:49.281145       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1212 21:05:49.281179       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1212 21:05:49.281210       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1212 21:05:49.281240       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1212 21:05:49.281267       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1212 21:05:49.281295       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1212 21:05:49.281322       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1212 21:05:49.281350       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1212 21:05:49.281379       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1212 21:05:49.281408       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1212 21:05:49.281437       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1212 21:05:49.315159       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:05:49.315278       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1212 21:05:49.320436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1212 21:05:49.332820       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 21:05:49.333128       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1212 21:05:49.333192       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1212 21:05:49.333470       1 instance.go:239] Using reconciler: lease
	W1212 21:05:49.335311       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:06:09.313704       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1212 21:06:09.313704       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1212 21:06:09.334486       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [93fc3054083af7a4f11519559898692bcb87a0a869c0e823fd96f50def2f02cd] <==
	I1212 21:06:20.368230       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 21:06:20.400872       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 21:06:20.412450       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 21:06:20.421494       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 21:06:20.413161       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 21:06:20.433292       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 21:06:20.435830       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 21:06:20.439607       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 21:06:20.439971       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 21:06:20.446200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 21:06:20.446507       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 21:06:20.451816       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 21:06:20.466902       1 cache.go:39] Caches are synced for autoregister controller
	W1212 21:06:20.494872       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I1212 21:06:20.498501       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 21:06:20.540491       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 21:06:20.544831       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1212 21:06:20.560023       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1212 21:06:20.915382       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 21:06:21.151536       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1212 21:06:24.277503       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I1212 21:06:26.132404       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 21:06:26.286031       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 21:06:26.435234       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	W1212 21:06:34.277202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	
	
	==> kube-controller-manager [03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d] <==
	I1212 21:05:49.621747       1 serving.go:386] Generated self-signed cert in-memory
	I1212 21:05:50.751392       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1212 21:05:50.752418       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:05:50.756190       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1212 21:05:50.756306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 21:05:50.756352       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 21:05:50.756362       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1212 21:06:20.286877       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [f08cf114510a22705e6eddaabf72535ab357ca9404fe3342c1903bc51578da78] <==
	I1212 21:06:25.956884       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 21:06:25.956955       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 21:06:25.958970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 21:06:25.962893       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 21:06:25.966650       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 21:06:25.966831       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 21:06:25.966929       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 21:06:25.970777       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1212 21:06:25.977116       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 21:06:25.978294       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 21:06:25.978569       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 21:06:25.979499       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 21:06:25.983384       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 21:06:25.991347       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 21:06:25.992778       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 21:06:26.003403       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 21:06:26.005063       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 21:07:03.404820       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-88mnq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-88mnq\": the object has been modified; please apply your changes to the latest version and try again"
	I1212 21:07:03.412728       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0e70dacf-1fbe-4ce7-930f-4790639720ae", APIVersion:"v1", ResourceVersion:"293", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-88mnq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-88mnq": the object has been modified; please apply your changes to the latest version and try again
	E1212 21:07:59.838789       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-7vpdp failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-7vpdp\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1212 21:08:00.368924       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-008703-m04"
	I1212 21:08:00.369535       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-008703-m05\" does not exist"
	I1212 21:08:00.462544       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-008703-m05" podCIDRs=["10.244.4.0/24"]
	I1212 21:08:00.966105       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-008703-m05"
	I1212 21:08:46.095858       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-008703-m04"
	
	
	==> kube-proxy [2b11faa987b07a654a1ecb1110634491c33e925915fa00680eccd4a7874806fc] <==
	I1212 21:06:23.734028       1 server_linux.go:53] "Using iptables proxy"
	I1212 21:06:24.050201       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 21:06:24.251547       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 21:06:24.251592       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1212 21:06:24.251667       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 21:06:24.378453       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 21:06:24.378516       1 server_linux.go:132] "Using iptables Proxier"
	I1212 21:06:24.392940       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 21:06:24.393314       1 server.go:527] "Version info" version="v1.34.2"
	I1212 21:06:24.393544       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:06:24.394794       1 config.go:200] "Starting service config controller"
	I1212 21:06:24.394851       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 21:06:24.394892       1 config.go:106] "Starting endpoint slice config controller"
	I1212 21:06:24.394921       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 21:06:24.394957       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 21:06:24.394983       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 21:06:24.395714       1 config.go:309] "Starting node config controller"
	I1212 21:06:24.398250       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 21:06:24.398321       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 21:06:24.497136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 21:06:24.497308       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 21:06:24.497322       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [05ba874359221bdf846b1fb8dbe911f962d4cf06c723c81f7a60410d0ca7fa2b] <==
	E1212 21:06:20.369105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 21:06:20.369154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:06:20.369207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 21:06:20.369802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:06:20.369869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:06:20.369925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:06:20.369973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 21:06:20.370030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 21:06:20.370079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 21:06:20.370124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 21:06:20.371252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 21:06:20.371299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 21:06:20.371338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 21:06:20.438949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 21:06:20.444983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 21:06:20.445109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1212 21:06:20.470730       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1212 21:08:00.700964       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2dqw9\": pod kindnet-2dqw9 is already assigned to node \"ha-008703-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-2dqw9" node="ha-008703-m05"
	E1212 21:08:00.711320       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 168f02be-0130-4f0b-8920-a4de479cff03(kube-system/kindnet-2dqw9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-2dqw9"
	E1212 21:08:00.711432       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2dqw9\": pod kindnet-2dqw9 is already assigned to node \"ha-008703-m05\"" logger="UnhandledError" pod="kube-system/kindnet-2dqw9"
	E1212 21:08:00.701131       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l5ppw\": pod kube-proxy-l5ppw is already assigned to node \"ha-008703-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l5ppw" node="ha-008703-m05"
	E1212 21:08:00.711520       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod e8b7e3de-7dbc-4512-abcb-5ec2ceffbac4(kube-system/kube-proxy-l5ppw) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-l5ppw"
	E1212 21:08:00.718215       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l5ppw\": pod kube-proxy-l5ppw is already assigned to node \"ha-008703-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-l5ppw"
	I1212 21:08:00.718284       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-l5ppw" node="ha-008703-m05"
	I1212 21:08:00.718655       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2dqw9" node="ha-008703-m05"
	
	
	==> kubelet <==
	Dec 12 21:06:20 ha-008703 kubelet[764]: E1212 21:06:20.676261     764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-008703\" already exists" pod="kube-system/kube-controller-manager-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.676518     764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.684227     764 apiserver.go:52] "Watching apiserver"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.715180     764 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-008703" podUID="13ad7cce-3343-4a6d-b066-b55715ef2727"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.733772     764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c574b029f9f86252bb40df91aa285cf" path="/var/lib/kubelet/pods/4c574b029f9f86252bb40df91aa285cf/volumes"
	Dec 12 21:06:20 ha-008703 kubelet[764]: E1212 21:06:20.737750     764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-008703\" already exists" pod="kube-system/kube-scheduler-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.772520     764 kubelet.go:3209] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.772704     764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.789443     764 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.857272     764 scope.go:117] "RemoveContainer" containerID="03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.891614     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee2850f7-5474-48e9-b8dc-f9e14292127e-xtables-lock\") pod \"kube-proxy-tgx5j\" (UID: \"ee2850f7-5474-48e9-b8dc-f9e14292127e\") " pod="kube-system/kube-proxy-tgx5j"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.891885     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee2850f7-5474-48e9-b8dc-f9e14292127e-lib-modules\") pod \"kube-proxy-tgx5j\" (UID: \"ee2850f7-5474-48e9-b8dc-f9e14292127e\") " pod="kube-system/kube-proxy-tgx5j"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892133     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-xtables-lock\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892297     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d57f23f-4461-4d86-b91f-e2628d8874ab-tmp\") pod \"storage-provisioner\" (UID: \"2d57f23f-4461-4d86-b91f-e2628d8874ab\") " pod="kube-system/storage-provisioner"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892406     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-cni-cfg\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.898926     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-lib-modules\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.897461     764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-008703" podStartSLOduration=0.897445384 podStartE2EDuration="897.445384ms" podCreationTimestamp="2025-12-12 21:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 21:06:20.850652974 +0000 UTC m=+34.291145116" watchObservedRunningTime="2025-12-12 21:06:20.897445384 +0000 UTC m=+34.337937510"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.972495     764 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.192647     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e WatchSource:0}: Error finding container b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e: Status 404 returned error can't find the container with id b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.402414     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226 WatchSource:0}: Error finding container 1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226: Status 404 returned error can't find the container with id 1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.434279     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced WatchSource:0}: Error finding container 021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced: Status 404 returned error can't find the container with id 021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.570067     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967 WatchSource:0}: Error finding container 2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967: Status 404 returned error can't find the container with id 2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967
	Dec 12 21:06:46 ha-008703 kubelet[764]: E1212 21:06:46.699197     764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50\": container with ID starting with f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50 not found: ID does not exist" containerID="f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50"
	Dec 12 21:06:46 ha-008703 kubelet[764]: I1212 21:06:46.699251     764 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50" err="rpc error: code = NotFound desc = could not find container \"f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50\": container with ID starting with f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50 not found: ID does not exist"
	Dec 12 21:06:53 ha-008703 kubelet[764]: I1212 21:06:53.074350     764 scope.go:117] "RemoveContainer" containerID="82dd101ece4d11a82b5e84808cb05db3a78e943db22ae1196fbeeda7f49c4b53"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703
helpers_test.go:270: (dbg) Run:  kubectl --context ha-008703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (91.21s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.482960847s)
ha_test.go:305: expected profile "ha-008703" in json of 'profile list' to include 4 nodes but have 5 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-008703\",\"Status\":\"HAppy\",\"Config\":{\"Name\":\"ha-008703\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfssh
ares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.2\",\"ClusterName\":\"ha-008703\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"I
P\":\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.49.4\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":false,\"Worker\":true},{\"Name\":\"m05\",\"IP\":\"192.168.49.6\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong
\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountM
Size\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-arm64 profile list --output json"
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect ha-008703
helpers_test.go:244: (dbg) docker inspect ha-008703:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	        "Created": "2025-12-12T20:51:45.347520369Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 449316,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:05:39.880681825Z",
	            "FinishedAt": "2025-12-12T21:05:38.645326548Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hostname",
	        "HostsPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/hosts",
	        "LogPath": "/var/lib/docker/containers/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a-json.log",
	        "Name": "/ha-008703",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-008703:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-008703",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a",
	                "LowerDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ac584d9274415ada5ce85ae0c8865c049d4554359bf88c7b031c67d24d03018f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-008703",
	                "Source": "/var/lib/docker/volumes/ha-008703/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-008703",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-008703",
	                "name.minikube.sigs.k8s.io": "ha-008703",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "56820d5d7e78ec2f02da47e339541c9ef651db5d532d64770a21ce2bbb5446a4",
	            "SandboxKey": "/var/run/docker/netns/56820d5d7e78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33202"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33203"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33206"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33204"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33205"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-008703": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d6:e7:89:49:21:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ff7ed303f4da65b7f5bbe1449be583e134fa05bb2920a77ae31b6f437cc1bd4b",
	                    "EndpointID": "3c6a3818203b2804ed1a97d15e01e57b58ac1b4d017d616dc02dd9125b0a0f3c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-008703",
	                        "2ec03df03a30"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
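	
	For reference, the container JSON above is also what the later provisioning steps in this log query programmatically: minikube's cli_runner reads the published host port for 22/tcp with a Go template (the same template appears further down, resolving to 127.0.0.1:33202). A minimal hypothetical sketch of that lookup against the ha-008703 container, purely illustrative and not minikube code:
	
	// sshport.go - hypothetical illustration of the 22/tcp host-port lookup.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Go template copied from the cli_runner invocation later in this log.
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, "ha-008703").Output()
		if err != nil {
			panic(err)
		}
		// For the inspect output above this prints 33202, matching the
		// "Using SSH client type: native ... 127.0.0.1 33202" lines below.
		fmt.Println(strings.TrimSpace(string(out)))
	}
	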
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-008703 -n ha-008703
helpers_test.go:253: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 logs -n 25: (1.892941731s)
helpers_test.go:261: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp testdata/cp-test.txt ha-008703-m04:/home/docker/cp-test.txt                                                            │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703-m04.txt │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703:/home/docker/cp-test_ha-008703-m04_ha-008703.txt                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703.txt                                                │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m02 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ cp      │ ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m03:/home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt              │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m04 sudo cat /home/docker/cp-test.txt                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ ssh     │ ha-008703 ssh -n ha-008703-m03 sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node stop m02 --alsologtostderr -v 5                                                                                      │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:56 UTC │
	│ node    │ ha-008703 node start m02 --alsologtostderr -v 5                                                                                     │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:56 UTC │ 12 Dec 25 20:57 UTC │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │ 12 Dec 25 20:57 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5                                                                                  │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 20:57 UTC │                     │
	│ node    │ ha-008703 node list --alsologtostderr -v 5                                                                                          │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	│ node    │ ha-008703 node delete m03 --alsologtostderr -v 5                                                                                    │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │                     │
	│ stop    │ ha-008703 stop --alsologtostderr -v 5                                                                                               │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │ 12 Dec 25 21:05 UTC │
	│ start   │ ha-008703 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                        │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:05 UTC │ 12 Dec 25 21:07 UTC │
	│ node    │ ha-008703 node add --control-plane --alsologtostderr -v 5                                                                           │ ha-008703 │ jenkins │ v1.37.0 │ 12 Dec 25 21:07 UTC │ 12 Dec 25 21:08 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:05:39
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:05:39.605178  449185 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:05:39.605402  449185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:39.605430  449185 out.go:374] Setting ErrFile to fd 2...
	I1212 21:05:39.605450  449185 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:05:39.605864  449185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:05:39.606369  449185 out.go:368] Setting JSON to false
	I1212 21:05:39.607946  449185 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":13692,"bootTime":1765559848,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 21:05:39.608060  449185 start.go:143] virtualization:  
	I1212 21:05:39.611335  449185 out.go:179] * [ha-008703] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 21:05:39.615242  449185 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:05:39.615314  449185 notify.go:221] Checking for updates...
	I1212 21:05:39.621077  449185 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:05:39.623949  449185 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:39.626804  449185 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 21:05:39.629715  449185 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 21:05:39.632603  449185 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:05:39.635954  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:39.636566  449185 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:05:39.669276  449185 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 21:05:39.669398  449185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:05:39.732289  449185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 21:05:39.722148611 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:05:39.732454  449185 docker.go:319] overlay module found
	I1212 21:05:39.735677  449185 out.go:179] * Using the docker driver based on existing profile
	I1212 21:05:39.738449  449185 start.go:309] selected driver: docker
	I1212 21:05:39.738468  449185 start.go:927] validating driver "docker" against &{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inacc
el:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:39.738617  449185 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:05:39.738715  449185 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:05:39.793928  449185 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:23 OomKillDisable:true NGoroutines:42 SystemTime:2025-12-12 21:05:39.784653162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:05:39.794497  449185 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:05:39.794535  449185 cni.go:84] Creating CNI manager for ""
	I1212 21:05:39.794590  449185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 21:05:39.794655  449185 start.go:353] cluster config:
	{Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false isti
o-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:39.797771  449185 out.go:179] * Starting "ha-008703" primary control-plane node in "ha-008703" cluster
	I1212 21:05:39.800532  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:05:39.803460  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:05:39.806386  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:39.806435  449185 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 21:05:39.806449  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:05:39.806468  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:05:39.806557  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:05:39.806568  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:05:39.806736  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:39.826241  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:05:39.826266  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:05:39.826283  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:05:39.826317  449185 start.go:360] acquireMachinesLock for ha-008703: {Name:mk6e7d74f274e3ed345384f8b747c056bd141bf6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:05:39.826376  449185 start.go:364] duration metric: took 38.285µs to acquireMachinesLock for "ha-008703"
	I1212 21:05:39.826401  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:05:39.826407  449185 fix.go:54] fixHost starting: 
	I1212 21:05:39.826688  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:39.844490  449185 fix.go:112] recreateIfNeeded on ha-008703: state=Stopped err=<nil>
	W1212 21:05:39.844521  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:05:39.847711  449185 out.go:252] * Restarting existing docker container for "ha-008703" ...
	I1212 21:05:39.847788  449185 cli_runner.go:164] Run: docker start ha-008703
	I1212 21:05:40.139310  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:40.163240  449185 kic.go:430] container "ha-008703" state is running.
	I1212 21:05:40.163662  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:40.191201  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:40.191459  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:05:40.191534  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:40.219354  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:40.219684  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:40.219693  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:05:40.220585  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:05:43.371942  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 21:05:43.371968  449185 ubuntu.go:182] provisioning hostname "ha-008703"
	I1212 21:05:43.372054  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.389586  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:43.389913  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:43.389930  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703 && echo "ha-008703" | sudo tee /etc/hostname
	I1212 21:05:43.553625  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703
	
	I1212 21:05:43.553711  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.571751  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:43.572079  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:43.572102  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:05:43.724831  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:05:43.724856  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:05:43.724884  449185 ubuntu.go:190] setting up certificates
	I1212 21:05:43.724903  449185 provision.go:84] configureAuth start
	I1212 21:05:43.724977  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:43.743377  449185 provision.go:143] copyHostCerts
	I1212 21:05:43.743421  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:43.743463  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:05:43.743471  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:43.743550  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:05:43.743646  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:43.743662  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:05:43.743667  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:43.743692  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:05:43.743751  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:43.743767  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:05:43.743771  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:43.743797  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:05:43.743859  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703 san=[127.0.0.1 192.168.49.2 ha-008703 localhost minikube]
	I1212 21:05:43.832472  449185 provision.go:177] copyRemoteCerts
	I1212 21:05:43.832541  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:05:43.832590  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:43.850299  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:43.956285  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:05:43.956420  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:05:43.974303  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:05:43.974381  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1212 21:05:43.992649  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:05:43.992714  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:05:44.013810  449185 provision.go:87] duration metric: took 288.892734ms to configureAuth
	I1212 21:05:44.013838  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:05:44.014088  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:44.014212  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.036649  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:44.037017  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33202 <nil> <nil>}
	I1212 21:05:44.037041  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:05:44.386038  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:05:44.386060  449185 machine.go:97] duration metric: took 4.194590859s to provisionDockerMachine
	I1212 21:05:44.386072  449185 start.go:293] postStartSetup for "ha-008703" (driver="docker")
	I1212 21:05:44.386084  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:05:44.386193  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:05:44.386264  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.403386  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.508670  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:05:44.512195  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:05:44.512221  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:05:44.512236  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:05:44.512291  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:05:44.512398  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:05:44.512408  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:05:44.512511  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:05:44.520678  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:44.539590  449185 start.go:296] duration metric: took 153.501859ms for postStartSetup
	I1212 21:05:44.539670  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:05:44.539734  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.557736  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.661664  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:05:44.666383  449185 fix.go:56] duration metric: took 4.839968923s for fixHost
	I1212 21:05:44.666409  449185 start.go:83] releasing machines lock for "ha-008703", held for 4.840020362s
	I1212 21:05:44.666477  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 21:05:44.684762  449185 ssh_runner.go:195] Run: cat /version.json
	I1212 21:05:44.684817  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.685079  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:05:44.685134  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:44.708523  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.712753  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:44.904198  449185 ssh_runner.go:195] Run: systemctl --version
	I1212 21:05:44.910603  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:05:44.946561  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:05:44.951022  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:05:44.951140  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:05:44.959060  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:05:44.959085  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:05:44.959118  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:05:44.959164  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:05:44.974739  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:05:44.987642  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:05:44.987758  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:05:45.005197  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:05:45.023356  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:05:45.187771  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:05:45.360312  449185 docker.go:234] disabling docker service ...
	I1212 21:05:45.360416  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:05:45.382556  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:05:45.397072  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:05:45.515232  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:05:45.630674  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:05:45.644319  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:05:45.659761  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:05:45.659839  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.669217  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:05:45.669329  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.678932  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.691100  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.701211  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:05:45.710201  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.720671  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.729634  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:45.739187  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:05:45.747460  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:05:45.755441  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:45.880049  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:05:46.064833  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:05:46.064907  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:05:46.068969  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:05:46.069037  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:05:46.072837  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:05:46.098607  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:05:46.098708  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:46.128236  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:46.158573  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:05:46.161391  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:05:46.178132  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:05:46.181932  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:46.192021  449185 kubeadm.go:884] updating cluster {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:
false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:f
alse DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:05:46.192177  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:46.192251  449185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:05:46.227916  449185 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:05:46.227942  449185 crio.go:433] Images already preloaded, skipping extraction
	I1212 21:05:46.227998  449185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:05:46.253605  449185 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:05:46.253629  449185 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:05:46.253638  449185 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.2 crio true true} ...
	I1212 21:05:46.253742  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
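Note: the kubelet flags printed above end up in the systemd drop-in copied a few steps below (the 359-byte scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A sketch for confirming the node's kubelet actually runs with them, assuming systemd and the same drop-in path:

    # Show the unit plus drop-ins as systemd sees them
    sudo systemctl cat kubelet
    # Confirm the running kubelet picked up the per-node flags, e.g. --node-ip
    pgrep -a kubelet | grep -o -- '--node-ip=[0-9.]*'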
	I1212 21:05:46.253823  449185 ssh_runner.go:195] Run: crio config
	I1212 21:05:46.327816  449185 cni.go:84] Creating CNI manager for ""
	I1212 21:05:46.327839  449185 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1212 21:05:46.327863  449185 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:05:46.327893  449185 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-008703 NodeName:ha-008703 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/mani
fests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:05:46.328051  449185 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-008703"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
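Note: this generated kubeadm config is staged as /var/tmp/minikube/kubeadm.yaml.new (the 2206-byte scp below) and later diffed against the live /var/tmp/minikube/kubeadm.yaml. On recent kubeadm releases the staged file can also be sanity-checked directly; a hedged sketch, assuming a kubeadm binary sits next to the kubelet binary shown above (the config validate subcommand exists in kubeadm v1.26 and later):

    # Validate the staged config before minikube swaps it in
    sudo /var/lib/minikube/binaries/v1.34.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new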
	
	I1212 21:05:46.328077  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:05:46.328142  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:05:46.341034  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
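Note: kube-vip can balance control-plane traffic over IPVS, but because lsmod found no ip_vs modules minikube falls back to a plain ARP-advertised virtual IP (vip_arp is set to "true" in the manifest that follows). Loading the modules on the host before start is the usual way to get IPVS load-balancing back; a sketch:

    # Check for the IPVS modules kube-vip probes for, and load them if missing
    lsmod | grep ip_vs || sudo modprobe ip_vs
    sudo modprobe ip_vs_rr 2>/dev/null || true   # round-robin scheduler, optional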
	I1212 21:05:46.341215  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
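Note: this static-pod manifest is what keeps the HA endpoint 192.168.49.254 (the APIServerHAVIP above) pinned to eth0 on whichever control-plane node currently holds the plndr-cp-lock lease. A hedged sketch for checking the VIP from inside a node, using only addresses that appear in this log:

    # On the current lease holder the VIP shows up as an extra address on eth0
    ip -4 addr show dev eth0 | grep 192.168.49.254
    # The API server should also answer on the VIP (healthz is readable anonymously under default RBAC)
    curl -sk -o /dev/null -w '%{http_code}\n' https://192.168.49.254:8443/healthz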
	I1212 21:05:46.341284  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:05:46.349457  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:05:46.349531  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1212 21:05:46.357340  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I1212 21:05:46.371153  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:05:46.384332  449185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2206 bytes)
	I1212 21:05:46.397565  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:05:46.411895  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:05:46.415692  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:46.426113  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:46.540637  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:46.557178  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.2
	I1212 21:05:46.557202  449185 certs.go:195] generating shared ca certs ...
	I1212 21:05:46.557219  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:46.557365  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:05:46.557420  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:05:46.557434  449185 certs.go:257] generating profile certs ...
	I1212 21:05:46.557525  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:05:46.557600  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.88c21904
	I1212 21:05:46.557649  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:05:46.557662  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:05:46.557674  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:05:46.557688  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:05:46.557703  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:05:46.557714  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:05:46.557731  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:05:46.557752  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:05:46.557770  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:05:46.557824  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:05:46.557861  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:05:46.557873  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:05:46.557901  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:05:46.557930  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:05:46.557955  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:05:46.558003  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:46.558037  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.558052  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.558066  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.558628  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:05:46.581904  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:05:46.602655  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:05:46.623772  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:05:46.644667  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:05:46.670849  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:05:46.690125  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:05:46.719167  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:05:46.743203  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:05:46.764296  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:05:46.788880  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:05:46.807678  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:05:46.822196  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:05:46.829401  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.838655  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:05:46.847305  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.851571  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.851686  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:05:46.894892  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:05:46.903217  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.911071  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:05:46.919222  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.923110  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.923186  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:46.964916  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:05:46.972957  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.980730  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:05:46.989130  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.993540  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:05:46.993610  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:05:47.036478  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:05:47.044309  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:05:47.048593  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:05:47.091048  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:05:47.132635  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:05:47.184472  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:05:47.233316  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:05:47.289483  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
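Note: each of the openssl -checkend 86400 runs above exits 0 only when the certificate is still valid 86400 seconds (24 hours) from now, which is presumably how the restart path decides whether the existing control-plane certs can be reused. The same check with a readable verdict:

    # 0 = at least 24h of validity left, non-zero = expiring or already expired
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "ok: more than 24h remaining" || echo "renewal needed"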
	I1212 21:05:47.363953  449185 kubeadm.go:401] StartCluster: {Name:ha-008703 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fals
e DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:05:47.364111  449185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:05:47.364177  449185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:05:47.424432  449185 cri.go:89] found id: "05ba874359221bdf846b1fb8dbe911f962d4cf06c723c81f7a60410d0ca7fa2b"
	I1212 21:05:47.424457  449185 cri.go:89] found id: "6e71e63256727335b637c10c11453815d5622c8d5eb3fb9654535f5b4b692c2f"
	I1212 21:05:47.424463  449185 cri.go:89] found id: "62a05b797d32258dc4368afc3978a5b3f463b4eafed6049189130af79138e299"
	I1212 21:05:47.424466  449185 cri.go:89] found id: "03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d"
	I1212 21:05:47.424469  449185 cri.go:89] found id: "e2542b7b3b0add4c1c8e1167b6f86cc40b8c70e55d0db7ae97014db17bfee8b2"
	I1212 21:05:47.424473  449185 cri.go:89] found id: ""
	I1212 21:05:47.424525  449185 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 21:05:47.441549  449185 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T21:05:47Z" level=error msg="open /run/runc: no such file or directory"
	I1212 21:05:47.441640  449185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:05:47.453706  449185 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:05:47.453729  449185 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:05:47.453787  449185 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:05:47.466638  449185 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:47.467064  449185 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-008703" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:47.467171  449185 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "ha-008703" cluster setting kubeconfig missing "ha-008703" context setting]
	I1212 21:05:47.467570  449185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.468100  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil
)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:05:47.468627  449185 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 21:05:47.468649  449185 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 21:05:47.468655  449185 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 21:05:47.468661  449185 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 21:05:47.468665  449185 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 21:05:47.468983  449185 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:05:47.469097  449185 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1212 21:05:47.477581  449185 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.49.2
	I1212 21:05:47.477605  449185 kubeadm.go:602] duration metric: took 23.869575ms to restartPrimaryControlPlane
	I1212 21:05:47.477614  449185 kubeadm.go:403] duration metric: took 113.6735ms to StartCluster
	I1212 21:05:47.477631  449185 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.477689  449185 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:05:47.478278  449185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:47.478485  449185 start.go:234] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:05:47.478512  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:05:47.478526  449185 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:05:47.479081  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:47.484597  449185 out.go:179] * Enabled addons: 
	I1212 21:05:47.487542  449185 addons.go:530] duration metric: took 9.010305ms for enable addons: enabled=[]
	I1212 21:05:47.487605  449185 start.go:247] waiting for cluster config update ...
	I1212 21:05:47.487614  449185 start.go:256] writing updated cluster config ...
	I1212 21:05:47.491098  449185 out.go:203] 
	I1212 21:05:47.494772  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:47.494914  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:47.498660  449185 out.go:179] * Starting "ha-008703-m02" control-plane node in "ha-008703" cluster
	I1212 21:05:47.501545  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:05:47.504535  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:05:47.507691  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:05:47.507726  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:05:47.507835  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:05:47.507851  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:05:47.507972  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:47.508202  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:05:47.538497  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:05:47.538521  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:05:47.538535  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:05:47.538559  449185 start.go:360] acquireMachinesLock for ha-008703-m02: {Name:mk9bbd559a38ee71084b431688c18ccf671707a9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:05:47.538627  449185 start.go:364] duration metric: took 48.131µs to acquireMachinesLock for "ha-008703-m02"
	I1212 21:05:47.538652  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:05:47.538660  449185 fix.go:54] fixHost starting: m02
	I1212 21:05:47.538948  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:47.574023  449185 fix.go:112] recreateIfNeeded on ha-008703-m02: state=Stopped err=<nil>
	W1212 21:05:47.574053  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:05:47.577557  449185 out.go:252] * Restarting existing docker container for "ha-008703-m02" ...
	I1212 21:05:47.577655  449185 cli_runner.go:164] Run: docker start ha-008703-m02
	I1212 21:05:47.980330  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 21:05:48.008294  449185 kic.go:430] container "ha-008703-m02" state is running.
	I1212 21:05:48.008939  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:48.047188  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:05:48.047422  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:05:48.047478  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:48.078749  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:48.079063  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:48.079074  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:05:48.079845  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44600->127.0.0.1:33207: read: connection reset by peer
	I1212 21:05:51.328699  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 21:05:51.328723  449185 ubuntu.go:182] provisioning hostname "ha-008703-m02"
	I1212 21:05:51.328784  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:51.373011  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:51.373328  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:51.373339  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m02 && echo "ha-008703-m02" | sudo tee /etc/hostname
	I1212 21:05:51.672250  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m02
	
	I1212 21:05:51.672411  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:51.697392  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:51.697707  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:51.697724  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:05:51.885149  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:05:51.885219  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:05:51.885252  449185 ubuntu.go:190] setting up certificates
	I1212 21:05:51.885290  449185 provision.go:84] configureAuth start
	I1212 21:05:51.885368  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:51.907559  449185 provision.go:143] copyHostCerts
	I1212 21:05:51.907599  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:51.907631  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:05:51.907638  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:05:51.907718  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:05:51.907797  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:51.907814  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:05:51.907820  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:05:51.907846  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:05:51.907886  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:51.907901  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:05:51.907905  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:05:51.907929  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:05:51.907973  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m02 san=[127.0.0.1 192.168.49.3 ha-008703-m02 localhost minikube]
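Note: configureAuth regenerates the machine's TLS server certificate with the SAN list printed above (loopback, the node IP 192.168.49.3, the hostname and the generic localhost/minikube names). A sketch for inspecting the SANs on the copy that lands on the node, path taken from the copyRemoteCerts step below:

    # List the Subject Alternative Names baked into the provisioned server cert
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A 1 'Subject Alternative Name'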
	I1212 21:05:52.137179  449185 provision.go:177] copyRemoteCerts
	I1212 21:05:52.137300  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:05:52.137386  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:52.156094  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:52.288849  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:05:52.288913  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:05:52.342195  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:05:52.342258  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:05:52.393562  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:05:52.393620  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:05:52.445696  449185 provision.go:87] duration metric: took 560.374153ms to configureAuth
	I1212 21:05:52.445764  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:05:52.446027  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:52.446170  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:52.478675  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:05:52.478980  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33207 <nil> <nil>}
	I1212 21:05:52.478993  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:05:53.000008  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:05:53.000110  449185 machine.go:97] duration metric: took 4.952677944s to provisionDockerMachine
	I1212 21:05:53.000138  449185 start.go:293] postStartSetup for "ha-008703-m02" (driver="docker")
	I1212 21:05:53.000177  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:05:53.000293  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:05:53.000358  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.020786  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.128335  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:05:53.131751  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:05:53.131783  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:05:53.131795  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:05:53.131855  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:05:53.131934  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:05:53.131947  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:05:53.132049  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:05:53.139844  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:53.158393  449185 start.go:296] duration metric: took 158.21332ms for postStartSetup
	I1212 21:05:53.158474  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:05:53.158534  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.176037  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.281959  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:05:53.287302  449185 fix.go:56] duration metric: took 5.74863443s for fixHost
	I1212 21:05:53.287331  449185 start.go:83] releasing machines lock for "ha-008703-m02", held for 5.748691916s
	I1212 21:05:53.287402  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m02
	I1212 21:05:53.307739  449185 out.go:179] * Found network options:
	I1212 21:05:53.310522  449185 out.go:179]   - NO_PROXY=192.168.49.2
	W1212 21:05:53.313363  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:05:53.313414  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:05:53.313489  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:05:53.313533  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.313574  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:05:53.313632  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m02
	I1212 21:05:53.336547  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.336799  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33207 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m02/id_rsa Username:docker}
	I1212 21:05:53.542870  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:05:53.567799  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:05:53.567925  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:05:53.589478  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:05:53.589553  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:05:53.589598  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:05:53.589671  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:05:53.609030  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:05:53.638599  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:05:53.638724  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:05:53.668742  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:05:53.694088  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:05:53.934693  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:05:54.164277  449185 docker.go:234] disabling docker service ...
	I1212 21:05:54.164417  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:05:54.185997  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:05:54.207462  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:05:54.437335  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:05:54.661473  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:05:54.679927  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:05:54.707742  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:05:54.707861  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.723319  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:05:54.723443  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.740396  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.751373  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.768858  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:05:54.780854  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.795944  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.808854  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:05:54.818935  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:05:54.833159  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:05:54.849406  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:55.082636  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:05:55.362814  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:05:55.362938  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:05:55.366812  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:05:55.366918  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:05:55.370570  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:05:55.399084  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:05:55.399168  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:55.428944  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:05:55.460814  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:05:55.463826  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:05:55.466808  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:05:55.495103  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:05:55.503442  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:55.518854  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:05:55.519096  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:55.519362  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:05:55.545294  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:05:55.545592  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.3
	I1212 21:05:55.545608  449185 certs.go:195] generating shared ca certs ...
	I1212 21:05:55.545622  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:05:55.545735  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:05:55.545785  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:05:55.545796  449185 certs.go:257] generating profile certs ...
	I1212 21:05:55.545885  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:05:55.545952  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.b6a91b51
	I1212 21:05:55.546008  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:05:55.546022  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:05:55.546043  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:05:55.546059  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:05:55.546082  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:05:55.546098  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:05:55.546112  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:05:55.546126  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:05:55.546142  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:05:55.546197  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:05:55.546246  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:05:55.546262  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:05:55.546293  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:05:55.546320  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:05:55.546354  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:05:55.546415  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:05:55.546463  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:55.546490  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:05:55.546515  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:05:55.546583  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:05:55.568767  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:05:55.668715  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 21:05:55.672576  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 21:05:55.680945  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 21:05:55.684500  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 21:05:55.693000  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 21:05:55.696718  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 21:05:55.704917  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 21:05:55.708459  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 21:05:55.717032  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 21:05:55.720547  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 21:05:55.728907  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 21:05:55.732537  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 21:05:55.740854  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:05:55.760026  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:05:55.778517  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:05:55.797624  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:05:55.817142  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:05:55.835385  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:05:55.853338  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:05:55.872093  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:05:55.890019  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:05:55.908331  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:05:55.926030  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:05:55.944002  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 21:05:55.956838  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 21:05:55.969593  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 21:05:55.982132  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 21:05:55.995578  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 21:05:56.013190  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 21:05:56.026969  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 21:05:56.040988  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:05:56.047942  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.056004  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:05:56.064163  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.068273  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.068362  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:05:56.109836  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:05:56.118260  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.126352  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:05:56.134010  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.137848  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.137914  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:05:56.179470  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:05:56.187587  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.195301  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:05:56.203258  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.207359  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.207467  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:05:56.248706  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
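	(Editor's note: the ln / `openssl x509 -hash` / `test -L` sequence above is the standard OpenSSL CA-directory layout: each trusted PEM is reachable through a `<subject-hash>.0` symlink in /etc/ssl/certs. A small sketch of that relationship, shelling out to openssl the same way and assuming root access to /etc/ssl/certs, might look like:)

	```go
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash computes the OpenSSL subject hash of a PEM certificate and
	// points /etc/ssl/certs/<hash>.0 at it, which is the link the log verifies with
	// `test -L /etc/ssl/certs/51391683.0` and friends.
	func linkBySubjectHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// Equivalent of ln -fs: remove any existing link, then create a fresh one.
		_ = os.Remove(link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
	```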
	I1212 21:05:56.256310  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:05:56.260190  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:05:56.306385  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:05:56.347361  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:05:56.389865  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:05:56.430835  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:05:56.472973  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
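	(Editor's note: each `openssl x509 -checkend 86400` run above asks whether a control-plane certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; this is a standalone sketch with an illustrative path, not minikube's own code:)

	```go
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file at path
	// expires within the given window, matching `openssl x509 -checkend 86400`
	// when window is 24h.
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		// Illustrative path; the log checks apiserver, etcd and front-proxy client certs.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
	```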
	I1212 21:05:56.521282  449185 kubeadm.go:935] updating node {m02 192.168.49.3 8443 v1.34.2 crio true true} ...
	I1212 21:05:56.521453  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:05:56.521498  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:05:56.521575  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:05:56.534831  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:05:56.534951  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
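	(Editor's note: the `lsmod | grep ip_vs` failure a few lines earlier is what makes kube-vip give up on IPVS control-plane load balancing and fall back to the ARP-based static-pod config shown above. A minimal sketch of that kind of kernel-module probe in Go, reading /proc/modules directly instead of shelling out to lsmod, could be:)

	```go
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hasKernelModule reports whether a module (e.g. "ip_vs") appears in
	// /proc/modules, which is the same information `lsmod` prints.
	func hasKernelModule(name string) (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) > 0 && fields[0] == name {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := hasKernelModule("ip_vs")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("ip_vs loaded:", ok)
	}
	```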
	I1212 21:05:56.535047  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:05:56.543116  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:05:56.543223  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 21:05:56.551463  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:05:56.566227  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:05:56.579329  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:05:56.592969  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:05:56.596983  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:05:56.607297  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:56.744346  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:56.759793  449185 start.go:236] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:05:56.760120  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:05:56.766599  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:05:56.769234  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:05:56.908410  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:05:56.923082  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:05:56.923202  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:05:56.923464  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m02" to be "Ready" ...
	W1212 21:06:06.924664  449185 node_ready.go:55] error getting node "ha-008703-m02" condition "Ready" status (will retry): Get "https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02": net/http: TLS handshake timeout
	I1212 21:06:10.340284  449185 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://192.168.49.2:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:06:20.254665  449185 node_ready.go:49] node "ha-008703-m02" is "Ready"
	I1212 21:06:20.254694  449185 node_ready.go:38] duration metric: took 23.33118731s for node "ha-008703-m02" to be "Ready" ...
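	(Editor's note: the node-Ready wait above is driven by a client-go rest.Config built from the profile's client certificate, key and the cluster CA, with the stale VIP host overridden to the primary's address. A standalone sketch of the same Ready check, with illustrative cert paths and not minikube's internal kapi/node_ready helpers, might look like:)

	```go
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// nodeIsReady fetches a node and inspects its Ready condition, mirroring the
	// node_ready wait in the log above.
	func nodeIsReady(cfg *rest.Config, name string) (bool, error) {
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return false, err
		}
		node, err := clientset.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg := &rest.Config{
			Host: "https://192.168.49.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				// Illustrative paths; use the profile's actual client cert, key and CA.
				CertFile: "/path/to/client.crt",
				KeyFile:  "/path/to/client.key",
				CAFile:   "/path/to/ca.crt",
			},
		}
		ready, err := nodeIsReady(cfg, "ha-008703-m02")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ready:", ready)
	}
	```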
	I1212 21:06:20.254707  449185 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:06:20.254768  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:20.278828  449185 api_server.go:72] duration metric: took 23.518673135s to wait for apiserver process to appear ...
	I1212 21:06:20.278854  449185 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:06:20.278876  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:20.361760  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:06:20.361785  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:06:20.779312  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:20.809650  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:20.809728  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:21.279043  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:21.326274  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:21.326348  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:21.779606  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:21.811129  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:21.811210  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:22.279504  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:22.299466  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:22.299549  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:22.779116  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:22.797946  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:22.798028  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:23.279662  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:23.308514  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:23.308642  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:23.779220  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:23.800333  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:06:23.800429  449185 api_server.go:103] status: https://192.168.49.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:06:24.278995  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:24.291485  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 21:06:24.307186  449185 api_server.go:141] control plane version: v1.34.2
	I1212 21:06:24.307278  449185 api_server.go:131] duration metric: took 4.028399738s to wait for apiserver health ...
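	(Editor's note: the retries above keep hitting /healthz roughly every 500ms: first a 403 because the probe is unauthenticated and anonymous access is forbidden, then 500s while the rbac/bootstrap-roles and scheduling poststarthooks finish, and finally 200. A minimal standalone sketch of that kind of polling loop, not minikube's own api_server.go code and with TLS verification skipped purely to keep it short, could be:)

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 OK
	// or the deadline passes. A real client would trust the cluster CA instead of
	// skipping verification.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz %d: %s\n", resp.StatusCode, string(body))
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	```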
	I1212 21:06:24.307306  449185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:06:24.326207  449185 system_pods.go:59] 26 kube-system pods found
	I1212 21:06:24.326317  449185 system_pods.go:61] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.326341  449185 system_pods.go:61] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.326383  449185 system_pods.go:61] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:24.326404  449185 system_pods.go:61] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:06:24.326425  449185 system_pods.go:61] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:24.326458  449185 system_pods.go:61] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:24.326482  449185 system_pods.go:61] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:24.326502  449185 system_pods.go:61] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:24.326524  449185 system_pods.go:61] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:24.326559  449185 system_pods.go:61] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.326604  449185 system_pods.go:61] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.326624  449185 system_pods.go:61] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:24.326647  449185 system_pods.go:61] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.326684  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.326711  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:24.326732  449185 system_pods.go:61] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:24.326752  449185 system_pods.go:61] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:24.326770  449185 system_pods.go:61] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:24.326797  449185 system_pods.go:61] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:24.326828  449185 system_pods.go:61] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:24.326851  449185 system_pods.go:61] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:06:24.326870  449185 system_pods.go:61] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:24.326900  449185 system_pods.go:61] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:24.326923  449185 system_pods.go:61] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:24.326944  449185 system_pods.go:61] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:24.326964  449185 system_pods.go:61] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running
	I1212 21:06:24.326987  449185 system_pods.go:74] duration metric: took 19.648646ms to wait for pod list to return data ...
	I1212 21:06:24.327025  449185 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:06:24.345476  449185 default_sa.go:45] found service account: "default"
	I1212 21:06:24.345542  449185 default_sa.go:55] duration metric: took 18.497613ms for default service account to be created ...
	I1212 21:06:24.345567  449185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:06:24.441449  449185 system_pods.go:86] 26 kube-system pods found
	I1212 21:06:24.441494  449185 system_pods.go:89] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.441509  449185 system_pods.go:89] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:24.441517  449185 system_pods.go:89] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:24.441529  449185 system_pods.go:89] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:06:24.441537  449185 system_pods.go:89] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:24.441542  449185 system_pods.go:89] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:24.441549  449185 system_pods.go:89] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:24.441553  449185 system_pods.go:89] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:24.441557  449185 system_pods.go:89] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:24.441564  449185 system_pods.go:89] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.441576  449185 system_pods.go:89] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:06:24.441580  449185 system_pods.go:89] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:24.441592  449185 system_pods.go:89] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.441601  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:06:24.441606  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:24.441612  449185 system_pods.go:89] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:24.441616  449185 system_pods.go:89] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:24.441620  449185 system_pods.go:89] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:24.441627  449185 system_pods.go:89] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:24.441631  449185 system_pods.go:89] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:24.441646  449185 system_pods.go:89] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:06:24.441650  449185 system_pods.go:89] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:24.441654  449185 system_pods.go:89] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:24.441665  449185 system_pods.go:89] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:24.441671  449185 system_pods.go:89] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:24.441675  449185 system_pods.go:89] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running
	I1212 21:06:24.441684  449185 system_pods.go:126] duration metric: took 96.098139ms to wait for k8s-apps to be running ...
	I1212 21:06:24.441697  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:06:24.441755  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:06:24.458749  449185 system_svc.go:56] duration metric: took 17.042535ms WaitForService to wait for kubelet
	I1212 21:06:24.458826  449185 kubeadm.go:587] duration metric: took 27.69867432s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:24.458863  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:06:24.463250  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463295  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463308  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463313  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463317  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463322  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463325  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:24.463330  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:24.463334  449185 node_conditions.go:105] duration metric: took 4.443929ms to run NodePressure ...
	I1212 21:06:24.463360  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:06:24.463389  449185 start.go:256] writing updated cluster config ...
	I1212 21:06:24.467450  449185 out.go:203] 
	I1212 21:06:24.471714  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:24.471840  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.475478  449185 out.go:179] * Starting "ha-008703-m03" control-plane node in "ha-008703" cluster
	I1212 21:06:24.479357  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:06:24.482576  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:06:24.485573  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:06:24.485605  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:06:24.485687  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:06:24.485718  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:06:24.485736  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:06:24.485861  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.512091  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:06:24.512112  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:06:24.512126  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:06:24.512153  449185 start.go:360] acquireMachinesLock for ha-008703-m03: {Name:mkc4792dc097e09b497b46fff7452c5b0b6f70aa Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:24.512210  449185 start.go:364] duration metric: took 41.255µs to acquireMachinesLock for "ha-008703-m03"
	I1212 21:06:24.512230  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:06:24.512237  449185 fix.go:54] fixHost starting: m03
	I1212 21:06:24.512562  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:06:24.547705  449185 fix.go:112] recreateIfNeeded on ha-008703-m03: state=Stopped err=<nil>
	W1212 21:06:24.547736  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:06:24.551016  449185 out.go:252] * Restarting existing docker container for "ha-008703-m03" ...
	I1212 21:06:24.551124  449185 cli_runner.go:164] Run: docker start ha-008703-m03
	I1212 21:06:24.918317  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 21:06:24.943282  449185 kic.go:430] container "ha-008703-m03" state is running.
	I1212 21:06:24.944655  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:24.976163  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:24.976462  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:06:24.976536  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:25.007740  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:25.008073  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:25.008082  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:06:25.008934  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45896->127.0.0.1:33212: read: connection reset by peer
	I1212 21:06:28.195900  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m03
	
	I1212 21:06:28.195925  449185 ubuntu.go:182] provisioning hostname "ha-008703-m03"
	I1212 21:06:28.195992  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:28.238514  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:28.238834  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:28.238851  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m03 && echo "ha-008703-m03" | sudo tee /etc/hostname
	I1212 21:06:28.479384  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m03
	
	I1212 21:06:28.479480  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:28.507106  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:28.507416  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:28.507437  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:06:28.751314  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:06:28.751390  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:06:28.751429  449185 ubuntu.go:190] setting up certificates
	I1212 21:06:28.751469  449185 provision.go:84] configureAuth start
	I1212 21:06:28.751595  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:28.780423  449185 provision.go:143] copyHostCerts
	I1212 21:06:28.780473  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:28.780506  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:06:28.780519  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:28.780599  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:06:28.780687  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:28.780712  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:06:28.780720  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:28.780749  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:06:28.780795  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:28.780816  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:06:28.780823  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:28.780848  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:06:28.780902  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m03 san=[127.0.0.1 192.168.49.4 ha-008703-m03 localhost minikube]
	I1212 21:06:29.132570  449185 provision.go:177] copyRemoteCerts
	I1212 21:06:29.132679  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:06:29.132752  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:29.161077  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:29.290001  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:06:29.290063  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:06:29.326015  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:06:29.326077  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:06:29.373017  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:06:29.373102  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:06:29.430671  449185 provision.go:87] duration metric: took 679.168963ms to configureAuth
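configureAuth above issues a per-machine server certificate signed by the shared minikube CA, with the SANs listed at provision.go:117 (127.0.0.1, 192.168.49.4, ha-008703-m03, localhost, minikube), then copies ca.pem, server.pem and server-key.pem onto the node. A rough standard-library sketch of issuing such a certificate, assuming the CA key is a PKCS#1-encoded RSA key; the paths and validity period are illustrative, not minikube's own code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// CA material produced earlier in the run (paths shortened for the sketch).
	caPEM, err := os.ReadFile(".minikube/certs/ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile(".minikube/certs/ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	caKeyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || caKeyBlock == nil {
		log.Fatal("could not decode CA PEM data")
	}
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(caKeyBlock.Bytes) // assumes PKCS#1 RSA
	if err != nil {
		log.Fatal(err)
	}

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-008703-m03"}},
		// SANs as listed at provision.go:117 above.
		DNSNames:    []string{"ha-008703-m03", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.4")},
		NotBefore:   time.Now(),
		NotAfter:    time.Now().AddDate(3, 0, 0), // illustrative validity period
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}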
	I1212 21:06:29.430700  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:06:29.430943  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:29.431050  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:29.464440  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:29.464756  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33212 <nil> <nil>}
	I1212 21:06:29.464775  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:06:30.522791  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:06:30.522817  449185 machine.go:97] duration metric: took 5.546337341s to provisionDockerMachine
	I1212 21:06:30.522830  449185 start.go:293] postStartSetup for "ha-008703-m03" (driver="docker")
	I1212 21:06:30.522841  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:06:30.522923  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:06:30.522969  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.541196  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.648836  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:06:30.652559  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:06:30.652598  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:06:30.652624  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:06:30.652708  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:06:30.652823  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:06:30.652833  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:06:30.652939  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:06:30.661331  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:30.687281  449185 start.go:296] duration metric: took 164.433925ms for postStartSetup
	I1212 21:06:30.687373  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:06:30.687421  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.713364  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.821971  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:06:30.827033  449185 fix.go:56] duration metric: took 6.314788872s for fixHost
	I1212 21:06:30.827061  449185 start.go:83] releasing machines lock for "ha-008703-m03", held for 6.314842198s
	I1212 21:06:30.827140  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 21:06:30.847749  449185 out.go:179] * Found network options:
	I1212 21:06:30.850465  449185 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W1212 21:06:30.853486  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853520  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853545  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:30.853558  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	I1212 21:06:30.853630  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:06:30.853672  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.853950  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:06:30.854006  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 21:06:30.875211  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:30.901708  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33212 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 21:06:31.084053  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:06:31.089338  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:06:31.089442  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:06:31.098288  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:06:31.098362  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:06:31.098418  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:06:31.098504  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:06:31.115825  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:06:31.132457  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:06:31.132578  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:06:31.150352  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:06:31.166465  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:06:31.301826  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:06:31.519838  449185 docker.go:234] disabling docker service ...
	I1212 21:06:31.519963  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:06:31.552895  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:06:31.586883  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:06:31.921487  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:06:32.171189  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:06:32.196225  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:06:32.218996  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:06:32.219066  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.231170  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:06:32.231254  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.264701  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.278943  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.293177  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:06:32.313973  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.323884  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.333399  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:06:32.345640  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:06:32.354606  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:06:32.378038  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:32.601691  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
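The sed commands above rewrite the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf so the runtime uses the registry.k8s.io/pause:3.10.1 pause image and the cgroupfs cgroup manager before crio is restarted. A small Go sketch of the same two substitutions, not minikube's own code, with error handling kept minimal:

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path from the log above
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Same effect as the sed 's|^.*pause_image = .*$|...|' invocation above.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// And the cgroup_manager rewrite that follows it.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		log.Fatal(err)
	}
	// The runtime is then restarted (systemctl daemon-reload && systemctl restart crio).
}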
	I1212 21:06:32.867254  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:06:32.867377  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:06:32.871734  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:06:32.871807  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:06:32.875400  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:06:32.900774  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:06:32.900910  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:06:32.930896  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:06:32.972077  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:06:32.974985  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:06:32.977916  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1212 21:06:32.980878  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:06:32.998829  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:06:33.008314  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:06:33.019604  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:06:33.019853  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:33.020130  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:06:33.050582  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:06:33.050909  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.4
	I1212 21:06:33.050924  449185 certs.go:195] generating shared ca certs ...
	I1212 21:06:33.050954  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:06:33.051090  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:06:33.051141  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:06:33.051152  449185 certs.go:257] generating profile certs ...
	I1212 21:06:33.051239  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key
	I1212 21:06:33.051314  449185 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key.77152b1c
	I1212 21:06:33.051365  449185 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key
	I1212 21:06:33.051374  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:06:33.051387  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:06:33.051401  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:06:33.051418  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:06:33.051430  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1212 21:06:33.051446  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1212 21:06:33.051463  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1212 21:06:33.051479  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1212 21:06:33.051535  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:06:33.051571  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:06:33.051584  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:06:33.051615  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:06:33.051643  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:06:33.051671  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:06:33.051721  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:33.051757  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.051774  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.051785  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.051851  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 21:06:33.071355  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33202 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 21:06:33.180711  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1212 21:06:33.184847  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1212 21:06:33.194292  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1212 21:06:33.198466  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1212 21:06:33.207132  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1212 21:06:33.210762  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1212 21:06:33.219366  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1212 21:06:33.222902  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I1212 21:06:33.231254  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1212 21:06:33.235252  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1212 21:06:33.245320  449185 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1212 21:06:33.249647  449185 ssh_runner.go:448] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1212 21:06:33.259234  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:06:33.282501  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:06:33.308249  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:06:33.330512  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:06:33.350745  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1212 21:06:33.371841  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:06:33.392489  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:06:33.415260  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:06:33.435093  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:06:33.455125  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:06:33.475775  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:06:33.503119  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1212 21:06:33.519902  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1212 21:06:33.541097  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1212 21:06:33.558546  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I1212 21:06:33.580936  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1212 21:06:33.604112  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1212 21:06:33.628438  449185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1212 21:06:33.645138  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:06:33.653214  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.661760  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:06:33.672498  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.677561  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.677637  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:06:33.725658  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:06:33.734300  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.742147  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:06:33.750364  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.754312  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.754435  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:06:33.795883  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:06:33.803561  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.811944  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:06:33.819768  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.823821  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.823917  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:06:33.869341  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:06:33.877525  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:06:33.881524  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:06:33.923421  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:06:33.965151  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:06:34.007958  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:06:34.056315  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:06:34.099324  449185 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
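Each of the openssl x509 -noout ... -checkend 86400 calls above asks whether the given certificate stays valid for at least another 24 hours. The same check expressed with Go's crypto/x509, using one of the paths from the log as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	// One of the certificates checked above; any of the /var/lib/minikube/certs paths would do.
	raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// The same question openssl answers with -checkend 86400:
	// is the certificate still valid 24 hours from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}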
	I1212 21:06:34.142509  449185 kubeadm.go:935] updating node {m03 192.168.49.4 8443 v1.34.2 crio true true} ...
	I1212 21:06:34.142710  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:06:34.142750  449185 kube-vip.go:115] generating kube-vip config ...
	I1212 21:06:34.142821  449185 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I1212 21:06:34.155586  449185 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:06:34.155655  449185 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.2
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
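Because the lsmod | grep ip_vs probe exited non-zero, the manifest above is rendered for VIP failover only: ARP announcements on eth0 with leader election for 192.168.49.254, and IPVS-based control-plane load balancing is skipped. A sketch of that probe, assuming lsmod is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// ipvsAvailable mirrors the probe from the log: run lsmod and look for the
// ip_vs modules; if they are absent, only VIP failover is configured.
func ipvsAvailable() (bool, error) {
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		return false, err
	}
	return strings.Contains(string(out), "ip_vs"), nil
}

func main() {
	ok, err := ipvsAvailable()
	if err != nil {
		fmt.Println("lsmod failed:", err)
		return
	}
	fmt.Println("ip_vs modules loaded:", ok)
}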
	I1212 21:06:34.155735  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:06:34.164504  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:06:34.164593  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1212 21:06:34.172960  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:06:34.187238  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:06:34.202155  449185 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I1212 21:06:34.217531  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:06:34.221916  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:06:34.232222  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:34.409764  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:06:34.425465  449185 start.go:236] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:06:34.426019  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:34.429018  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:06:34.431984  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:06:34.608481  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:06:34.623603  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:06:34.623719  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:06:34.623971  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m03" to be "Ready" ...
	I1212 21:06:34.627483  449185 node_ready.go:49] node "ha-008703-m03" is "Ready"
	I1212 21:06:34.627510  449185 node_ready.go:38] duration metric: took 3.502711ms for node "ha-008703-m03" to be "Ready" ...
	I1212 21:06:34.627524  449185 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:06:34.627583  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:35.127774  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:35.627665  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:36.128468  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:36.628211  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:37.128314  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:37.627991  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:38.127766  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:38.627868  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:39.128698  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:39.628035  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:40.128648  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:40.627740  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:41.128354  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:41.628245  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:42.130632  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:42.627827  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:43.128583  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:43.627968  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:44.128136  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:44.628605  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:45.128568  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:45.627727  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:46.128033  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:46.627763  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:47.128250  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:47.628035  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:48.127920  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:48.628389  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:49.127872  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:49.628485  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:50.127813  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:50.627737  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:51.128714  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:51.628186  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:52.128495  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:52.627734  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.128077  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.628172  449185 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:06:53.643287  449185 api_server.go:72] duration metric: took 19.217761741s to wait for apiserver process to appear ...
	I1212 21:06:53.643310  449185 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:06:53.643330  449185 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1212 21:06:53.653231  449185 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1212 21:06:53.654408  449185 api_server.go:141] control plane version: v1.34.2
	I1212 21:06:53.654429  449185 api_server.go:131] duration metric: took 11.111371ms to wait for apiserver health ...
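Once pgrep finds the kube-apiserver process, readiness is confirmed by polling the /healthz endpoint over HTTPS until it returns 200 "ok", as the two lines above show. A sketch of such a poll against the endpoint from the log; it skips TLS verification purely to stay short, whereas the real check trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; InsecureSkipVerify is an assumption made to
	// keep the sketch small, not what the test itself does.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence of the pgrep polls above
	}
	fmt.Println("apiserver never became healthy")
}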
	I1212 21:06:53.654438  449185 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:06:53.664181  449185 system_pods.go:59] 26 kube-system pods found
	I1212 21:06:53.664268  449185 system_pods.go:61] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.664292  449185 system_pods.go:61] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.664326  449185 system_pods.go:61] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:53.664350  449185 system_pods.go:61] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running
	I1212 21:06:53.664399  449185 system_pods.go:61] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:53.664423  449185 system_pods.go:61] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:53.664447  449185 system_pods.go:61] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:53.664476  449185 system_pods.go:61] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:53.664511  449185 system_pods.go:61] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:53.664543  449185 system_pods.go:61] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running
	I1212 21:06:53.664562  449185 system_pods.go:61] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running
	I1212 21:06:53.664586  449185 system_pods.go:61] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:53.664617  449185 system_pods.go:61] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running
	I1212 21:06:53.664639  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running
	I1212 21:06:53.664655  449185 system_pods.go:61] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:53.664672  449185 system_pods.go:61] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:53.664692  449185 system_pods.go:61] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:53.664722  449185 system_pods.go:61] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:53.664747  449185 system_pods.go:61] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:53.664767  449185 system_pods.go:61] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:53.664786  449185 system_pods.go:61] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running
	I1212 21:06:53.664806  449185 system_pods.go:61] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:53.664833  449185 system_pods.go:61] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:53.664856  449185 system_pods.go:61] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:53.664876  449185 system_pods.go:61] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:53.664898  449185 system_pods.go:61] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:06:53.664934  449185 system_pods.go:74] duration metric: took 10.478512ms to wait for pod list to return data ...
	I1212 21:06:53.664963  449185 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:06:53.672021  449185 default_sa.go:45] found service account: "default"
	I1212 21:06:53.672087  449185 default_sa.go:55] duration metric: took 7.103458ms for default service account to be created ...
	I1212 21:06:53.672114  449185 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:06:53.683734  449185 system_pods.go:86] 26 kube-system pods found
	I1212 21:06:53.683818  449185 system_pods.go:89] "coredns-66bc5c9577-8tvqx" [e856bce0-421c-4566-99a5-10cce65bc2c6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.683843  449185 system_pods.go:89] "coredns-66bc5c9577-kls2t" [05ee9c80-f827-4e11-85b4-692d388723d7] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:06:53.683876  449185 system_pods.go:89] "etcd-ha-008703" [c9eebe8e-e713-4219-a216-cbb925ba1bae] Running
	I1212 21:06:53.683898  449185 system_pods.go:89] "etcd-ha-008703-m02" [c7d7f891-74ad-4734-b649-f0d51a9f610d] Running
	I1212 21:06:53.683916  449185 system_pods.go:89] "etcd-ha-008703-m03" [e4ac9555-5a86-4ba9-bd03-078a3e3415b6] Running
	I1212 21:06:53.683935  449185 system_pods.go:89] "kindnet-6dvv4" [2083888c-1707-45bb-84fb-01485196046c] Running
	I1212 21:06:53.683958  449185 system_pods.go:89] "kindnet-blbfb" [7268742e-8aae-4b7d-b2a0-5efafa137779] Running
	I1212 21:06:53.683985  449185 system_pods.go:89] "kindnet-f7h24" [d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec] Running
	I1212 21:06:53.684009  449185 system_pods.go:89] "kindnet-fwsws" [afcea849-421d-4500-bc0f-5db3ed74b0ea] Running
	I1212 21:06:53.684028  449185 system_pods.go:89] "kube-apiserver-ha-008703" [f958c91d-c438-4d78-9aa3-63aebeb8c5ee] Running
	I1212 21:06:53.684048  449185 system_pods.go:89] "kube-apiserver-ha-008703-m02" [0e95fa68-0b6a-483a-9168-1c521cc74985] Running
	I1212 21:06:53.684069  449185 system_pods.go:89] "kube-apiserver-ha-008703-m03" [77e62d65-4609-43cc-9b0f-5e002a34d764] Running
	I1212 21:06:53.684096  449185 system_pods.go:89] "kube-controller-manager-ha-008703" [1f668bbc-200d-418b-9526-311e6f6cd056] Running
	I1212 21:06:53.684121  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m02" [423bd095-6bb3-41fa-a9d8-bf0181829066] Running
	I1212 21:06:53.684144  449185 system_pods.go:89] "kube-controller-manager-ha-008703-m03" [88a095e7-62fe-408c-9096-e6f0692696c1] Running
	I1212 21:06:53.684165  449185 system_pods.go:89] "kube-proxy-26llr" [c4449c07-f802-4ef4-8fca-c841a2759710] Running
	I1212 21:06:53.684195  449185 system_pods.go:89] "kube-proxy-5cjcj" [610a37c5-d704-413d-9121-db265c5dff1c] Running
	I1212 21:06:53.684216  449185 system_pods.go:89] "kube-proxy-tgx5j" [ee2850f7-5474-48e9-b8dc-f9e14292127e] Running
	I1212 21:06:53.684234  449185 system_pods.go:89] "kube-proxy-v8lm4" [9527dee4-3047-48fd-86fe-93d833167071] Running
	I1212 21:06:53.684254  449185 system_pods.go:89] "kube-scheduler-ha-008703" [f3fb4c30-e347-409d-bfa5-7992c98e6c4d] Running
	I1212 21:06:53.684274  449185 system_pods.go:89] "kube-scheduler-ha-008703-m02" [437d98b4-f43b-4e29-b71f-07c5d601fc1d] Running
	I1212 21:06:53.684305  449185 system_pods.go:89] "kube-scheduler-ha-008703-m03" [d35fda73-08b8-4b02-a220-f384899cd335] Running
	I1212 21:06:53.684334  449185 system_pods.go:89] "kube-vip-ha-008703" [d6cc390d-08be-4bf2-8f2f-11ebe042464d] Running
	I1212 21:06:53.684356  449185 system_pods.go:89] "kube-vip-ha-008703-m02" [9cb7ec0e-cb25-4294-9e33-a4d66155c8a9] Running
	I1212 21:06:53.684505  449185 system_pods.go:89] "kube-vip-ha-008703-m03" [1a4ca0a1-9bd0-48ac-a2e1-a91d65180cc9] Running
	I1212 21:06:53.684532  449185 system_pods.go:89] "storage-provisioner" [2d57f23f-4461-4d86-b91f-e2628d8874ab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1212 21:06:53.684555  449185 system_pods.go:126] duration metric: took 12.421784ms to wait for k8s-apps to be running ...
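The k8s-apps step lists the kube-system pods and reports each one's state; entries such as coredns and storage-provisioner above are Running as a pod phase even while their Ready condition is still false. A client-go sketch that lists the same pods with their Ready condition, again assuming an illustrative kubeconfig path:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path (assumption, not from the log).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		// Mirrors the "Running / Ready:ContainersNotReady" annotations in the log above.
		fmt.Printf("%-45s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
	}
}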
	I1212 21:06:53.684581  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:06:53.684664  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:06:53.707726  449185 system_svc.go:56] duration metric: took 23.13631ms WaitForService to wait for kubelet
	I1212 21:06:53.707794  449185 kubeadm.go:587] duration metric: took 19.282272877s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:06:53.707828  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:06:53.713066  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713138  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713167  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713189  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713224  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713251  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713272  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:06:53.713294  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:06:53.713315  449185 node_conditions.go:105] duration metric: took 5.4683ms to run NodePressure ...
	I1212 21:06:53.713355  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:06:53.713389  449185 start.go:256] writing updated cluster config ...
	I1212 21:06:53.716967  449185 out.go:203] 
	I1212 21:06:53.720156  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:53.720328  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:53.723670  449185 out.go:179] * Starting "ha-008703-m04" worker node in "ha-008703" cluster
	I1212 21:06:53.726637  449185 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:06:53.729576  449185 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:06:53.732517  449185 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:06:53.732614  449185 cache.go:65] Caching tarball of preloaded images
	I1212 21:06:53.732589  449185 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:06:53.732947  449185 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:06:53.732979  449185 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:06:53.733130  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:53.769116  449185 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:06:53.769147  449185 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:06:53.769168  449185 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:06:53.769196  449185 start.go:360] acquireMachinesLock for ha-008703-m04: {Name:mk62cc2a2cc2e6d3b3f47556aaddea9ef719055b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:06:53.769254  449185 start.go:364] duration metric: took 38.549µs to acquireMachinesLock for "ha-008703-m04"
	I1212 21:06:53.769277  449185 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:06:53.769289  449185 fix.go:54] fixHost starting: m04
	I1212 21:06:53.769545  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 21:06:53.786769  449185 fix.go:112] recreateIfNeeded on ha-008703-m04: state=Stopped err=<nil>
	W1212 21:06:53.786801  449185 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:06:53.789926  449185 out.go:252] * Restarting existing docker container for "ha-008703-m04" ...
	I1212 21:06:53.790089  449185 cli_runner.go:164] Run: docker start ha-008703-m04
	I1212 21:06:54.156965  449185 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 21:06:54.178693  449185 kic.go:430] container "ha-008703-m04" state is running.
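
fixHost found the existing "ha-008703-m04" container stopped, ran `docker start`, and then re-inspected it until the state came back as "running". A minimal sketch of that restart-and-poll loop, shelling out to the docker CLI; the 30-second timeout is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerState returns the docker-reported state (e.g. "running") for name.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "ha-008703-m04"

	if err := exec.Command("docker", "start", name).Run(); err != nil {
		panic(err)
	}

	deadline := time.Now().Add(30 * time.Second) // illustrative timeout
	for time.Now().Before(deadline) {
		state, err := containerState(name)
		if err == nil && state == "running" {
			fmt.Println("container is running")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	panic("container did not reach the running state in time")
}
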
	I1212 21:06:54.179092  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:54.203905  449185 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/config.json ...
	I1212 21:06:54.204146  449185 machine.go:94] provisionDockerMachine start ...
	I1212 21:06:54.204209  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:54.236695  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:54.237065  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:54.237081  449185 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:06:54.237686  449185 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
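
The first SSH dial fails with a handshake EOF because sshd inside the freshly started container is not listening yet; libmachine simply retries until it is, succeeding about three seconds later below. A stdlib-only sketch of waiting for the forwarded SSH port from the log (127.0.0.1:33217) to accept TCP connections; it checks reachability only, not the SSH handshake itself:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort dials addr repeatedly until it accepts a TCP connection
// or the deadline passes.
func waitForPort(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("port %s not reachable: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForPort("127.0.0.1:33217", 30*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("ssh port is accepting connections")
}
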
	I1212 21:06:57.432360  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m04
	
	I1212 21:06:57.432405  449185 ubuntu.go:182] provisioning hostname "ha-008703-m04"
	I1212 21:06:57.432471  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:57.466545  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:57.466905  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:57.466917  449185 main.go:143] libmachine: About to run SSH command:
	sudo hostname ha-008703-m04 && echo "ha-008703-m04" | sudo tee /etc/hostname
	I1212 21:06:57.695949  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: ha-008703-m04
	
	I1212 21:06:57.696057  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:57.725675  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:57.725993  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:57.726015  449185 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-008703-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-008703-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-008703-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:06:57.922048  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:06:57.922076  449185 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:06:57.922097  449185 ubuntu.go:190] setting up certificates
	I1212 21:06:57.922108  449185 provision.go:84] configureAuth start
	I1212 21:06:57.922191  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:57.949300  449185 provision.go:143] copyHostCerts
	I1212 21:06:57.949346  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:57.949379  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:06:57.949390  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:06:57.949467  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:06:57.949557  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:57.949579  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:06:57.949590  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:06:57.949619  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:06:57.949669  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:57.949692  449185 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:06:57.949702  449185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:06:57.949735  449185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:06:57.949797  449185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.ha-008703-m04 san=[127.0.0.1 192.168.49.5 ha-008703-m04 localhost minikube]
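
configureAuth signs a per-node server certificate against the shared minikube CA with the SANs listed above (127.0.0.1, 192.168.49.5, ha-008703-m04, localhost, minikube). The following is a self-contained crypto/x509 sketch of a certificate with that shape, generating a throwaway CA instead of loading ca.pem/ca-key.pem; key size and validity period are illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA, standing in for minikube's ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SANs from the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-008703-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-008703-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
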
	I1212 21:06:58.253055  449185 provision.go:177] copyRemoteCerts
	I1212 21:06:58.253130  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:06:58.253185  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:58.272770  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:58.384265  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:06:58.384326  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:06:58.432775  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:06:58.432846  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1212 21:06:58.468705  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:06:58.468769  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:06:58.498893  449185 provision.go:87] duration metric: took 576.767506ms to configureAuth
	I1212 21:06:58.498961  449185 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:06:58.499231  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:06:58.499373  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:58.531077  449185 main.go:143] libmachine: Using SSH client type: native
	I1212 21:06:58.531395  449185 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33217 <nil> <nil>}
	I1212 21:06:58.531411  449185 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:06:59.036280  449185 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:06:59.036310  449185 machine.go:97] duration metric: took 4.83214688s to provisionDockerMachine
	I1212 21:06:59.036331  449185 start.go:293] postStartSetup for "ha-008703-m04" (driver="docker")
	I1212 21:06:59.036343  449185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:06:59.036466  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:06:59.036523  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.086256  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.217706  449185 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:06:59.225272  449185 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:06:59.225304  449185 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:06:59.225326  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:06:59.225398  449185 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:06:59.225489  449185 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:06:59.225502  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:06:59.225626  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:06:59.239694  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:06:59.289259  449185 start.go:296] duration metric: took 252.894748ms for postStartSetup
	I1212 21:06:59.289353  449185 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:06:59.289435  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.318501  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.433235  449185 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:06:59.440975  449185 fix.go:56] duration metric: took 5.671680345s for fixHost
	I1212 21:06:59.441000  449185 start.go:83] releasing machines lock for "ha-008703-m04", held for 5.671734343s
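
postStartSetup ends with a disk check on /var (the `df -h` and `df -BG` runs above). A Linux-only sketch of the same measurement done locally with syscall.Statfs instead of shelling out to df:

package main

import (
	"fmt"
	"syscall"
)

// usedPercent reports how full the filesystem containing path is,
// mirroring `df -h /var | awk 'NR==2{print $5}'` from the log.
func usedPercent(path string) (float64, error) {
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		return 0, err
	}
	total := st.Blocks * uint64(st.Bsize)
	avail := st.Bavail * uint64(st.Bsize)
	used := total - avail
	return 100 * float64(used) / float64(total), nil
}

func main() {
	p, err := usedPercent("/var")
	if err != nil {
		panic(err)
	}
	fmt.Printf("/var is %.1f%% used\n", p)
}
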
	I1212 21:06:59.441074  449185 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 21:06:59.473221  449185 out.go:179] * Found network options:
	I1212 21:06:59.477821  449185 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	W1212 21:06:59.480861  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480899  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480912  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480936  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480956  449185 proxy.go:120] fail to check proxy env: Error ip not in block
	W1212 21:06:59.480968  449185 proxy.go:120] fail to check proxy env: Error ip not in block
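
The repeated "fail to check proxy env: Error ip not in block" warnings come from testing each address against the NO_PROXY list as if its entries were CIDR blocks; here the entries are plain IPs, so the CIDR parse fails and the warning is logged. A small sketch of a more permissive check that accepts either exact IPs or CIDR entries (this is not minikube's actual proxy logic):

package main

import (
	"fmt"
	"net"
	"strings"
)

// bypassProxy reports whether ip matches any entry in noProxy,
// where entries may be exact IPs or CIDR blocks.
func bypassProxy(ip string, noProxy string) bool {
	target := net.ParseIP(ip)
	for _, entry := range strings.Split(noProxy, ",") {
		entry = strings.TrimSpace(entry)
		if entry == "" {
			continue
		}
		if _, block, err := net.ParseCIDR(entry); err == nil {
			if block.Contains(target) {
				return true
			}
			continue
		}
		if other := net.ParseIP(entry); other != nil && other.Equal(target) {
			return true
		}
	}
	return false
}

func main() {
	noProxy := "192.168.49.2,192.168.49.3,192.168.49.4" // from the log above
	fmt.Println(bypassProxy("192.168.49.5", noProxy))   // false: m04 is not listed
	fmt.Println(bypassProxy("192.168.49.2", noProxy))   // true
}
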
	I1212 21:06:59.481044  449185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:06:59.481089  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.481371  449185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:06:59.481425  449185 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 21:06:59.521656  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.528821  449185 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33217 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 21:06:59.865561  449185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:06:59.874595  449185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:06:59.874667  449185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:06:59.887303  449185 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:06:59.887378  449185 start.go:496] detecting cgroup driver to use...
	I1212 21:06:59.887427  449185 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:06:59.887500  449185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:06:59.908986  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:06:59.940196  449185 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:06:59.940301  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:06:59.959663  449185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:06:59.976282  449185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:07:00.307427  449185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:07:00.569417  449185 docker.go:234] disabling docker service ...
	I1212 21:07:00.569500  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:07:00.607031  449185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:07:00.633272  449185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:07:00.844907  449185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:07:01.084528  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:07:01.108001  449185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:07:01.130446  449185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:07:01.130569  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.145280  449185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:07:01.145425  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.165912  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.178770  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.192394  449185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:07:01.203182  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.214233  449185 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.224343  449185 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:07:01.236075  449185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:07:01.246300  449185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:07:01.256331  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:01.516203  449185 ssh_runner.go:195] Run: sudo systemctl restart crio
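
The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins pause_image to registry.k8s.io/pause:3.10.1, switches cgroup_manager to cgroupfs, and re-adds conmon_cgroup before the daemon-reload and crio restart. A sketch of the same line-oriented rewrite done in Go on an in-memory copy of the drop-in (values mirror the log; error handling omitted):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// rewriteCrioConf applies the same substitutions the log shows sed performing.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	if !strings.Contains(conf, "conmon_cgroup") {
		conf = strings.Replace(conf,
			`cgroup_manager = "cgroupfs"`,
			"cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"", 1)
	}
	return conf
}

func main() {
	in := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	fmt.Print(rewriteCrioConf(in))
}
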
	I1212 21:07:01.766997  449185 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:07:01.767119  449185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:07:01.776270  449185 start.go:564] Will wait 60s for crictl version
	I1212 21:07:01.776437  449185 ssh_runner.go:195] Run: which crictl
	I1212 21:07:01.784745  449185 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:07:01.824822  449185 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:07:01.824977  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:07:01.889046  449185 ssh_runner.go:195] Run: crio --version
	I1212 21:07:01.956065  449185 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:07:01.959062  449185 out.go:179]   - env NO_PROXY=192.168.49.2
	I1212 21:07:01.962079  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I1212 21:07:01.964978  449185 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3,192.168.49.4
	I1212 21:07:01.967779  449185 cli_runner.go:164] Run: docker network inspect ha-008703 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:07:01.996732  449185 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1212 21:07:02.001678  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
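
The bash one-liner above makes the host.minikube.internal entry idempotent: it strips any existing line for that name and appends a fresh "192.168.49.1<TAB>host.minikube.internal"; the same pattern reappears later for control-plane.minikube.internal. A sketch of the equivalent edit on an in-memory hosts file:

package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry removes any line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
// Blank lines are dropped for brevity.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if strings.HasSuffix(line, "\t"+name) || line == "" {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.168.49.1", "host.minikube.internal"))
}
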
	I1212 21:07:02.020405  449185 mustload.go:66] Loading cluster: ha-008703
	I1212 21:07:02.020654  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:07:02.020930  449185 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 21:07:02.039611  449185 host.go:66] Checking if "ha-008703" exists ...
	I1212 21:07:02.039893  449185 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703 for IP: 192.168.49.5
	I1212 21:07:02.039901  449185 certs.go:195] generating shared ca certs ...
	I1212 21:07:02.039915  449185 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:07:02.040028  449185 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:07:02.040067  449185 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:07:02.040078  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1212 21:07:02.040092  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1212 21:07:02.040104  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1212 21:07:02.040116  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1212 21:07:02.040169  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:07:02.040202  449185 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:07:02.040210  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:07:02.040237  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:07:02.040261  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:07:02.040288  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:07:02.040334  449185 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:07:02.040380  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem -> /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.040396  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.040407  449185 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.040424  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:07:02.066397  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:07:02.105376  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:07:02.137944  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:07:02.170023  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:07:02.210932  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:07:02.238540  449185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:07:02.269874  449185 ssh_runner.go:195] Run: openssl version
	I1212 21:07:02.281063  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.291218  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:07:02.301041  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.308712  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.308786  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:07:02.368311  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:07:02.378631  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.387217  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:07:02.398975  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.403766  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.403869  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:07:02.470421  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:07:02.480522  449185 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.493373  449185 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:07:02.510638  449185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.516014  449185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.516150  449185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:07:02.591218  449185 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
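
Each CA certificate copied to /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and then symlinked as /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 above) so OpenSSL can locate it by subject hash. A sketch that shells out to openssl for the hash and creates the link; it assumes it runs with enough privileges to write under /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 pointing at certPath,
// the same layout the test log sets up with ln -fs.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // emulate ln -f
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked")
}
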
	I1212 21:07:02.600904  449185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:07:02.619811  449185 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1212 21:07:02.619887  449185 kubeadm.go:935] updating node {m04 192.168.49.5 0 v1.34.2 crio false true} ...
	I1212 21:07:02.619990  449185 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-008703-m04 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:ha-008703 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:07:02.620088  449185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:07:02.636422  449185 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:07:02.636540  449185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1212 21:07:02.650400  449185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1212 21:07:02.684861  449185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
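
The 10-kubeadm.conf drop-in scp'd above is rendered per node: hostname-override and node-ip vary, while the rest of the ExecStart line is shared. A shortened text/template sketch of that rendering using the m04 values from the log (several kubelet flags present in the real unit are omitted for brevity):

package main

import (
	"os"
	"text/template"
)

const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	data := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.34.2", "ha-008703-m04", "192.168.49.5"}

	tmpl := template.Must(template.New("kubeadm-dropin").Parse(dropIn))
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
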
	I1212 21:07:02.708803  449185 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I1212 21:07:02.713707  449185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:07:02.731184  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:03.010394  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:07:03.061651  449185 start.go:236] Will wait 6m0s for node &{Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:false Worker:true}
	I1212 21:07:03.062018  449185 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:07:03.067183  449185 out.go:179] * Verifying Kubernetes components...
	I1212 21:07:03.070801  449185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:07:03.406466  449185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:07:03.471431  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1212 21:07:03.471508  449185 kubeadm.go:492] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I1212 21:07:03.471736  449185 node_ready.go:35] waiting up to 6m0s for node "ha-008703-m04" to be "Ready" ...
	I1212 21:07:03.505163  449185 node_ready.go:49] node "ha-008703-m04" is "Ready"
	I1212 21:07:03.505194  449185 node_ready.go:38] duration metric: took 33.438197ms for node "ha-008703-m04" to be "Ready" ...
	I1212 21:07:03.505209  449185 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:07:03.505266  449185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:07:03.526122  449185 system_svc.go:56] duration metric: took 20.904535ms WaitForService to wait for kubelet
	I1212 21:07:03.526155  449185 kubeadm.go:587] duration metric: took 464.111537ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:07:03.526175  449185 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:07:03.582671  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582703  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582714  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582719  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582723  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582727  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582731  449185 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:07:03.582735  449185 node_conditions.go:123] node cpu capacity is 2
	I1212 21:07:03.582741  449185 node_conditions.go:105] duration metric: took 56.560779ms to run NodePressure ...
	I1212 21:07:03.582752  449185 start.go:242] waiting for startup goroutines ...
	I1212 21:07:03.582774  449185 start.go:256] writing updated cluster config ...
	I1212 21:07:03.583086  449185 ssh_runner.go:195] Run: rm -f paused
	I1212 21:07:03.601326  449185 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:07:03.602059  449185 kapi.go:59] client config for ha-008703: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/ha-008703/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:07:03.627964  449185 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8tvqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.640449  449185 pod_ready.go:94] pod "coredns-66bc5c9577-8tvqx" is "Ready"
	I1212 21:07:03.640525  449185 pod_ready.go:86] duration metric: took 12.481008ms for pod "coredns-66bc5c9577-8tvqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.640551  449185 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kls2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.647941  449185 pod_ready.go:94] pod "coredns-66bc5c9577-kls2t" is "Ready"
	I1212 21:07:03.648021  449185 pod_ready.go:86] duration metric: took 7.447403ms for pod "coredns-66bc5c9577-kls2t" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.734522  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.742549  449185 pod_ready.go:94] pod "etcd-ha-008703" is "Ready"
	I1212 21:07:03.742645  449185 pod_ready.go:86] duration metric: took 8.036611ms for pod "etcd-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.742670  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.751107  449185 pod_ready.go:94] pod "etcd-ha-008703-m02" is "Ready"
	I1212 21:07:03.751180  449185 pod_ready.go:86] duration metric: took 8.490203ms for pod "etcd-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.751203  449185 pod_ready.go:83] waiting for pod "etcd-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:03.802884  449185 request.go:683] "Waited before sending request" delay="51.579039ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-008703-m03"
	I1212 21:07:04.003143  449185 request.go:683] "Waited before sending request" delay="191.298042ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:04.008011  449185 pod_ready.go:94] pod "etcd-ha-008703-m03" is "Ready"
	I1212 21:07:04.008105  449185 pod_ready.go:86] duration metric: took 256.8794ms for pod "etcd-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.203542  449185 request.go:683] "Waited before sending request" delay="195.301148ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I1212 21:07:04.208571  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.402858  449185 request.go:683] "Waited before sending request" delay="194.13984ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703"
	I1212 21:07:04.603054  449185 request.go:683] "Waited before sending request" delay="196.30777ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:04.607366  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703" is "Ready"
	I1212 21:07:04.607392  449185 pod_ready.go:86] duration metric: took 398.743662ms for pod "kube-apiserver-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.607403  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:04.802681  449185 request.go:683] "Waited before sending request" delay="195.203703ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703-m02"
	I1212 21:07:05.004599  449185 request.go:683] "Waited before sending request" delay="198.050663ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:05.009883  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703-m02" is "Ready"
	I1212 21:07:05.009916  449185 pod_ready.go:86] duration metric: took 402.505715ms for pod "kube-apiserver-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.009927  449185 pod_ready.go:83] waiting for pod "kube-apiserver-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.203348  449185 request.go:683] "Waited before sending request" delay="193.318894ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-008703-m03"
	I1212 21:07:05.402598  449185 request.go:683] "Waited before sending request" delay="195.266325ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:05.407026  449185 pod_ready.go:94] pod "kube-apiserver-ha-008703-m03" is "Ready"
	I1212 21:07:05.407054  449185 pod_ready.go:86] duration metric: took 397.119016ms for pod "kube-apiserver-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.603514  449185 request.go:683] "Waited before sending request" delay="196.332041ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I1212 21:07:05.609335  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:05.802598  449185 request.go:683] "Waited before sending request" delay="193.136821ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703"
	I1212 21:07:06.002969  449185 request.go:683] "Waited before sending request" delay="196.400711ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:06.009868  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703" is "Ready"
	I1212 21:07:06.009898  449185 pod_ready.go:86] duration metric: took 400.534916ms for pod "kube-controller-manager-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.009910  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.203284  449185 request.go:683] "Waited before sending request" delay="193.288724ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703-m02"
	I1212 21:07:06.403087  449185 request.go:683] "Waited before sending request" delay="195.335069ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:06.406992  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703-m02" is "Ready"
	I1212 21:07:06.407024  449185 pod_ready.go:86] duration metric: took 397.103754ms for pod "kube-controller-manager-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.407035  449185 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:06.603444  449185 request.go:683] "Waited before sending request" delay="196.318585ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-008703-m03"
	I1212 21:07:06.803243  449185 request.go:683] "Waited before sending request" delay="196.311315ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:06.811152  449185 pod_ready.go:94] pod "kube-controller-manager-ha-008703-m03" is "Ready"
	I1212 21:07:06.811182  449185 pod_ready.go:86] duration metric: took 404.13997ms for pod "kube-controller-manager-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.003659  449185 request.go:683] "Waited before sending request" delay="192.369133ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I1212 21:07:07.008682  449185 pod_ready.go:83] waiting for pod "kube-proxy-26llr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.203112  449185 request.go:683] "Waited before sending request" delay="194.317566ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-26llr"
	I1212 21:07:07.403112  449185 request.go:683] "Waited before sending request" delay="196.188213ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m04"
	I1212 21:07:07.406710  449185 pod_ready.go:94] pod "kube-proxy-26llr" is "Ready"
	I1212 21:07:07.406741  449185 pod_ready.go:86] duration metric: took 398.024461ms for pod "kube-proxy-26llr" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.406752  449185 pod_ready.go:83] waiting for pod "kube-proxy-5cjcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.603217  449185 request.go:683] "Waited before sending request" delay="196.391784ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-5cjcj"
	I1212 21:07:07.802591  449185 request.go:683] "Waited before sending request" delay="195.268704ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:07.806437  449185 pod_ready.go:94] pod "kube-proxy-5cjcj" is "Ready"
	I1212 21:07:07.806468  449185 pod_ready.go:86] duration metric: took 399.70889ms for pod "kube-proxy-5cjcj" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:07.806478  449185 pod_ready.go:83] waiting for pod "kube-proxy-tgx5j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.003374  449185 request.go:683] "Waited before sending request" delay="196.807041ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tgx5j"
	I1212 21:07:08.203254  449185 request.go:683] "Waited before sending request" delay="193.281921ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:08.206488  449185 pod_ready.go:94] pod "kube-proxy-tgx5j" is "Ready"
	I1212 21:07:08.206516  449185 pod_ready.go:86] duration metric: took 400.031584ms for pod "kube-proxy-tgx5j" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.206527  449185 pod_ready.go:83] waiting for pod "kube-proxy-v8lm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.402890  449185 request.go:683] "Waited before sending request" delay="196.283952ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-v8lm4"
	I1212 21:07:08.602890  449185 request.go:683] "Waited before sending request" delay="190.306444ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:08.606678  449185 pod_ready.go:94] pod "kube-proxy-v8lm4" is "Ready"
	I1212 21:07:08.606704  449185 pod_ready.go:86] duration metric: took 400.170499ms for pod "kube-proxy-v8lm4" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:08.803166  449185 request.go:683] "Waited before sending request" delay="196.329375ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I1212 21:07:08.807939  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.006982  449185 request.go:683] "Waited before sending request" delay="198.916082ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703"
	I1212 21:07:09.203284  449185 request.go:683] "Waited before sending request" delay="192.346692ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703"
	I1212 21:07:09.206489  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703" is "Ready"
	I1212 21:07:09.206522  449185 pod_ready.go:86] duration metric: took 398.549635ms for pod "kube-scheduler-ha-008703" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.206532  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.402973  449185 request.go:683] "Waited before sending request" delay="196.306934ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703-m02"
	I1212 21:07:09.603345  449185 request.go:683] "Waited before sending request" delay="192.346225ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m02"
	I1212 21:07:09.611536  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703-m02" is "Ready"
	I1212 21:07:09.611565  449185 pod_ready.go:86] duration metric: took 405.026929ms for pod "kube-scheduler-ha-008703-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.611575  449185 pod_ready.go:83] waiting for pod "kube-scheduler-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:09.802963  449185 request.go:683] "Waited before sending request" delay="191.311533ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-008703-m03"
	I1212 21:07:10.004827  449185 request.go:683] "Waited before sending request" delay="198.485333ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-008703-m03"
	I1212 21:07:10.012647  449185 pod_ready.go:94] pod "kube-scheduler-ha-008703-m03" is "Ready"
	I1212 21:07:10.012677  449185 pod_ready.go:86] duration metric: took 401.094897ms for pod "kube-scheduler-ha-008703-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:07:10.012691  449185 pod_ready.go:40] duration metric: took 6.411220695s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
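
The pod_ready loop above lists kube-system pods one label selector at a time and waits for each pod to report the Ready condition, which is also what triggers the client-side throttling messages. A compressed client-go sketch of that kind of wait, assuming a kubeconfig at the default location; this is not the code path minikube itself uses:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selectors := []string{"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver"}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for _, sel := range selectors {
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			allReady := true
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					allReady = false
					break
				}
			}
			if allReady {
				fmt.Printf("all pods matching %q are Ready\n", sel)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
}
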
	I1212 21:07:10.085120  449185 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 21:07:10.090453  449185 out.go:179] * Done! kubectl is now configured to use "ha-008703" cluster and "default" namespace by default
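
The closing check compares the local kubectl minor version with the cluster's and only warns when the skew exceeds one minor release (1.33 against 1.34 is accepted here). A tiny sketch of that comparison:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions
// of two "major.minor[.patch]" strings.
func minorSkew(a, b string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("bad version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, err := minorSkew("1.33.2", "1.34.2")
	if err != nil {
		panic(err)
	}
	fmt.Printf("minor skew: %d\n", skew) // 1, within the supported window
	if skew > 1 {
		fmt.Println("warning: kubectl and cluster differ by more than one minor version")
	}
}
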
	
	
	==> CRI-O <==
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.084643835Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=4f025e76-4eca-4fb1-b55a-f8d9a43fa536 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.087572223Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8ebdfa7e-5f7d-4824-b4b7-0fe2edd10aff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.087672564Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.095689671Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.0959013Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/eb92904b79612128723b08cf808f293d7aa852c53deebc7388a003f7a25a6f9f/merged/etc/passwd: no such file or directory"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.095933095Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/eb92904b79612128723b08cf808f293d7aa852c53deebc7388a003f7a25a6f9f/merged/etc/group: no such file or directory"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.096211382Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.136290189Z" level=info msg="Created container 5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145: kube-system/storage-provisioner/storage-provisioner" id=8ebdfa7e-5f7d-4824-b4b7-0fe2edd10aff name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.137414204Z" level=info msg="Starting container: 5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145" id=c9a226e6-422b-41f8-9e9f-add9192400a7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:06:53 ha-008703 crio[623]: time="2025-12-12T21:06:53.14248122Z" level=info msg="Started container" PID=1398 containerID=5129752cc0a67709f0a9d2413d338da1db9d667fdd529f45eed404b8f11da145 description=kube-system/storage-provisioner/storage-provisioner id=c9a226e6-422b-41f8-9e9f-add9192400a7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.077353049Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.084667544Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.090321422Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.090434276Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.101511448Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.108846054Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.108901554Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.125800597Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.125957924Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.126043537Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133398738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133546145Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.133624332Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.148814452Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:07:02 ha-008703 crio[623]: time="2025-12-12T21:07:02.148949928Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                 NAMESPACE
	5129752cc0a67       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Running             storage-provisioner       2                   1b6b1faf503c8       storage-provisioner                 kube-system
	3f4c5923951e8       89a35e2ebb6b938201966889b5e8c85b931db6432c5643966116cd1c28bf45cd   2 minutes ago       Running             busybox                   1                   9a656c52a260b       busybox-7b57f96db7-tczdt            default
	560dd3383ed66       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   1                   2f24e16e55927       coredns-66bc5c9577-8tvqx            kube-system
	7cef3eaf30308       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   2 minutes ago       Running             kindnet-cni               1                   021217a0cf931       kindnet-f7h24                       kube-system
	82dd101ece4d1       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6   2 minutes ago       Exited              storage-provisioner       1                   1b6b1faf503c8       storage-provisioner                 kube-system
	ad94d81034c43       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   2 minutes ago       Running             coredns                   1                   b75479f05351c       coredns-66bc5c9577-kls2t            kube-system
	2b11faa987b07       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   2 minutes ago       Running             kube-proxy                1                   66c81b9e2ff38       kube-proxy-tgx5j                    kube-system
	f08cf114510a2       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   2 minutes ago       Running             kube-controller-manager   8                   19bf9c82b9d81       kube-controller-manager-ha-008703   kube-system
	93fc3054083af       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   2 minutes ago       Running             kube-apiserver            8                   8176618f6ba71       kube-apiserver-ha-008703            kube-system
	05ba874359221       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   3 minutes ago       Running             kube-scheduler            2                   60ffed268d568       kube-scheduler-ha-008703            kube-system
	6e71e63256727       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   3 minutes ago       Exited              kube-apiserver            7                   8176618f6ba71       kube-apiserver-ha-008703            kube-system
	62a05b797d322       369db9dfa6fa96c1f4a0f3c827dbe864b5ded1802c8b4810b5ff9fcc5f5f2c70   3 minutes ago       Running             kube-vip                  1                   8e01afee41b4c       kube-vip-ha-008703                  kube-system
	03159ef735d03       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   3 minutes ago       Exited              kube-controller-manager   7                   19bf9c82b9d81       kube-controller-manager-ha-008703   kube-system
	e2542b7b3b0ad       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   3 minutes ago       Running             etcd                      3                   e36007e1324cc       etcd-ha-008703                      kube-system
	
	
	==> coredns [560dd3383ed66f823e585260ec4823152488386a1e71bacea6cd9ca156adb2d8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:52286 - 29430 "HINFO IN 4498128949033305171.1950480245235256825. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020264931s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [ad94d81034c434b44c842f2117ddb8a51227d702a250a41dac1fac6dcf4f0e1c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:36509 - 26980 "HINFO IN 2040533104487656964.3099826236879850204. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003954694s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-008703
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T20_52_12_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:52:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:08:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:07:32 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:07:32 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:07:32 +0000   Fri, 12 Dec 2025 20:52:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:07:32 +0000   Fri, 12 Dec 2025 20:52:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-008703
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                6ff1a8bd-14d1-41ae-8cb8-9156f60dd654
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-tczdt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-66bc5c9577-8tvqx             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 coredns-66bc5c9577-kls2t             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     16m
	  kube-system                 etcd-ha-008703                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-f7h24                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-008703             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-008703    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-tgx5j                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-008703             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-008703                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 16m                  kube-proxy       
	  Normal   Starting                 2m29s                kube-proxy       
	  Normal   NodeHasSufficientPID     16m (x8 over 16m)    kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)    kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)    kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   Starting                 16m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    16m                  kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                  kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  16m                  kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           16m                  node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   NodeReady                15m                  kubelet          Node ha-008703 status is now: NodeReady
	  Normal   RegisteredNode           14m                  node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   Starting                 3m8s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m8s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m8s (x8 over 3m8s)  kubelet          Node ha-008703 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m8s (x8 over 3m8s)  kubelet          Node ha-008703 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m8s (x8 over 3m8s)  kubelet          Node ha-008703 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m29s                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           2m28s                node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           112s                 node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	  Normal   RegisteredNode           56s                  node-controller  Node ha-008703 event: Registered Node ha-008703 in Controller
	
	
	Name:               ha-008703-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_52_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:52:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:08:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:52:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:06:21 +0000   Fri, 12 Dec 2025 20:53:33 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-008703-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                ca808c21-ecc5-4ee7-9940-dffdef1da5b2
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-hltw8                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-008703-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         16m
	  kube-system                 kindnet-blbfb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-ha-008703-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-008703-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-5cjcj                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-008703-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-008703-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 2m11s                kube-proxy       
	  Normal   Starting                 11m                  kube-proxy       
	  Normal   Starting                 16m                  kube-proxy       
	  Normal   RegisteredNode           16m                  node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           16m                  node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           14m                  node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)    kubelet          Node ha-008703-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x8 over 12m)    kubelet          Node ha-008703-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 12m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)    kubelet          Node ha-008703-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           11m                  node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   Starting                 3m4s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 3m4s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  3m4s (x8 over 3m4s)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m4s (x8 over 3m4s)  kubelet          Node ha-008703-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m4s (x8 over 3m4s)  kubelet          Node ha-008703-m02 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m29s                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           2m28s                node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           112s                 node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	  Normal   RegisteredNode           56s                  node-controller  Node ha-008703-m02 event: Registered Node ha-008703-m02 in Controller
	
	
	Name:               ha-008703-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_54_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:54:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:08:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:08:52 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:08:52 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:08:52 +0000   Fri, 12 Dec 2025 20:54:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:08:52 +0000   Fri, 12 Dec 2025 20:54:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-008703-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                fa4c05be-b5d2-4bf0-a4b6-630b820e0e0a
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-kc6ms                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-008703-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         14m
	  kube-system                 kindnet-6dvv4                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      14m
	  kube-system                 kube-apiserver-ha-008703-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-008703-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-v8lm4                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-008703-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-008703-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 99s                    kube-proxy       
	  Normal   CIDRAssignmentFailed     14m                    cidrAllocator    Node ha-008703-m03 status is now: CIDRAssignmentFailed
	  Normal   RegisteredNode           14m                    node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           2m29s                  node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           2m28s                  node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   NodeHasSufficientMemory  2m27s (x8 over 2m27s)  kubelet          Node ha-008703-m03 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m27s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m27s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    2m27s (x8 over 2m27s)  kubelet          Node ha-008703-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m27s (x8 over 2m27s)  kubelet          Node ha-008703-m03 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           112s                   node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	  Normal   RegisteredNode           56s                    node-controller  Node ha-008703-m03 event: Registered Node ha-008703-m03 in Controller
	
	
	Name:               ha-008703-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T20_55_24_0700
	                    minikube.k8s.io/version=v1.37.0
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 20:55:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:08:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:07:49 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:07:49 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:07:49 +0000   Fri, 12 Dec 2025 20:55:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:07:49 +0000   Fri, 12 Dec 2025 20:56:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.5
	  Hostname:    ha-008703-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                8a9366c1-4fff-44a3-a6b8-824607a69efc
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fwsws       100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-proxy-26llr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	  hugepages-32Mi     0 (0%)     0 (0%)
	  hugepages-64Ki     0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 95s                  kube-proxy       
	  Normal   Starting                 13m                  kube-proxy       
	  Normal   NodeHasSufficientMemory  13m (x3 over 13m)    kubelet          Node ha-008703-m04 status is now: NodeHasSufficientMemory
	  Normal   CIDRAssignmentFailed     13m                  cidrAllocator    Node ha-008703-m04 status is now: CIDRAssignmentFailed
	  Normal   NodeHasSufficientPID     13m (x3 over 13m)    kubelet          Node ha-008703-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x3 over 13m)    kubelet          Node ha-008703-m04 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 13m                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   RegisteredNode           13m                  node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           13m                  node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   NodeReady                12m                  kubelet          Node ha-008703-m04 status is now: NodeReady
	  Normal   RegisteredNode           11m                  node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           2m29s                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           2m28s                node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   Starting                 119s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 119s                 kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  116s (x8 over 119s)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    116s (x8 over 119s)  kubelet          Node ha-008703-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s (x8 over 119s)  kubelet          Node ha-008703-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           112s                 node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	  Normal   RegisteredNode           56s                  node-controller  Node ha-008703-m04 event: Registered Node ha-008703-m04 in Controller
	
	
	Name:               ha-008703-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-008703-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=ha-008703
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_12_12T21_08_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 21:08:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-008703-m05
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:08:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:08:46 +0000   Fri, 12 Dec 2025 21:08:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:08:46 +0000   Fri, 12 Dec 2025 21:08:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:08:46 +0000   Fri, 12 Dec 2025 21:08:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:08:46 +0000   Fri, 12 Dec 2025 21:08:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.6
	  Hostname:    ha-008703-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                217ce67c-c46d-4546-ab8f-db6ccfc738bf
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.4.0/24
	PodCIDRs:                     10.244.4.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-008703-m05                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         50s
	  kube-system                 kindnet-2dqw9                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      54s
	  kube-system                 kube-apiserver-ha-008703-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-controller-manager-ha-008703-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-proxy-l5ppw                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 kube-scheduler-ha-008703-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 kube-vip-ha-008703-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        48s   kube-proxy       
	  Normal  RegisteredNode  54s   node-controller  Node ha-008703-m05 event: Registered Node ha-008703-m05 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node ha-008703-m05 event: Registered Node ha-008703-m05 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node ha-008703-m05 event: Registered Node ha-008703-m05 in Controller
	  Normal  RegisteredNode  51s   node-controller  Node ha-008703-m05 event: Registered Node ha-008703-m05 in Controller
	
	
	==> dmesg <==
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014528] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.501545] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.032660] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.806046] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.207098] kauditd_printk_skb: 39 callbacks suppressed
	[Dec12 18:13] hrtimer: interrupt took 4831498 ns
	[Dec12 20:09] kauditd_printk_skb: 8 callbacks suppressed
	[Dec12 20:10] overlayfs: idmapped layers are currently not supported
	[  +0.071952] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Dec12 20:16] overlayfs: idmapped layers are currently not supported
	[Dec12 20:17] overlayfs: idmapped layers are currently not supported
	[Dec12 20:35] overlayfs: idmapped layers are currently not supported
	[Dec12 20:52] overlayfs: idmapped layers are currently not supported
	[ +33.094252] overlayfs: idmapped layers are currently not supported
	[Dec12 20:53] overlayfs: idmapped layers are currently not supported
	[Dec12 20:55] overlayfs: idmapped layers are currently not supported
	[Dec12 20:56] overlayfs: idmapped layers are currently not supported
	[Dec12 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.790478] overlayfs: idmapped layers are currently not supported
	[Dec12 21:05] overlayfs: idmapped layers are currently not supported
	[  +3.613273] overlayfs: idmapped layers are currently not supported
	[Dec12 21:06] overlayfs: idmapped layers are currently not supported
	[Dec12 21:07] overlayfs: idmapped layers are currently not supported
	[ +26.617506] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [e2542b7b3b0add4c1c8e1167b6f86cc40b8c70e55d0db7ae97014db17bfee8b2] <==
	{"level":"warn","ts":"2025-12-12T21:07:47.405190Z","caller":"etcdhttp/peer.go:152","msg":"failed to promote a member","member-id":"3f1ca3d03b4df108","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2025-12-12T21:07:47.411112Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"3f1ca3d03b4df108","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-12T21:07:47.411150Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:47.720775Z","caller":"rafthttp/snapshot_sender.go:131","msg":"sent database snapshot","snapshot-index":2853,"remote-peer-id":"3f1ca3d03b4df108","bytes":4982666,"size":"5.0 MB"}
	{"level":"warn","ts":"2025-12-12T21:07:47.792749Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:07:47.837402Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:07:47.868799Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3f1ca3d03b4df108","error":"failed to write 3f1ca3d03b4df108 on stream Message (write tcp 192.168.49.2:2380->192.168.49.6:34384: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-12T21:07:47.869102Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:47.888004Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(4547689838480847112 7042564765798820169 12593026477526642892 15833178754663563274)"}
	{"level":"info","ts":"2025-12-12T21:07:47.888231Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:47.888302Z","caller":"etcdserver/server.go:1768","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:47.927151Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:48.196993Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:48.219871Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"warn","ts":"2025-12-12T21:07:48.246127Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"3f1ca3d03b4df108","error":"failed to write 3f1ca3d03b4df108 on stream MsgApp v2 (write tcp 192.168.49.2:2380->192.168.49.6:34368: write: broken pipe)"}
	{"level":"warn","ts":"2025-12-12T21:07:48.246456Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:48.247353Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"3f1ca3d03b4df108","stream-type":"stream Message"}
	{"level":"info","ts":"2025-12-12T21:07:48.247474Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:48.247512Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:48.262693Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"3f1ca3d03b4df108","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-12-12T21:07:48.262741Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"3f1ca3d03b4df108"}
	{"level":"info","ts":"2025-12-12T21:07:56.975244Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-12T21:08:01.070631Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-12T21:08:05.250404Z","caller":"etcdserver/server.go:2262","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-12-12T21:08:17.721286Z","caller":"etcdserver/server.go:1872","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"3f1ca3d03b4df108","bytes":4982666,"size":"5.0 MB","took":"31.235800309s"}
	
	
	==> kernel <==
	 21:08:54 up  3:51,  0 user,  load average: 1.86, 1.81, 1.27
	Linux ha-008703 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7cef3eaf30308ab6e267a8568bc724dbe47546cc79d171e489dd52fca0f76a09] <==
	I1212 21:08:32.075333       1 main.go:324] Node ha-008703-m03 has CIDR [10.244.2.0/24] 
	I1212 21:08:32.075628       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1212 21:08:32.075656       1 main.go:324] Node ha-008703-m04 has CIDR [10.244.3.0/24] 
	I1212 21:08:32.075875       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1212 21:08:32.075892       1 main.go:324] Node ha-008703-m05 has CIDR [10.244.4.0/24] 
	I1212 21:08:42.083775       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1212 21:08:42.083814       1 main.go:324] Node ha-008703-m04 has CIDR [10.244.3.0/24] 
	I1212 21:08:42.084010       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1212 21:08:42.084025       1 main.go:324] Node ha-008703-m05 has CIDR [10.244.4.0/24] 
	I1212 21:08:42.084102       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 21:08:42.084117       1 main.go:301] handling current node
	I1212 21:08:42.084130       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1212 21:08:42.084136       1 main.go:324] Node ha-008703-m02 has CIDR [10.244.1.0/24] 
	I1212 21:08:42.084199       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1212 21:08:42.084206       1 main.go:324] Node ha-008703-m03 has CIDR [10.244.2.0/24] 
	I1212 21:08:52.074447       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I1212 21:08:52.074777       1 main.go:324] Node ha-008703-m03 has CIDR [10.244.2.0/24] 
	I1212 21:08:52.075092       1 main.go:297] Handling node with IPs: map[192.168.49.5:{}]
	I1212 21:08:52.075139       1 main.go:324] Node ha-008703-m04 has CIDR [10.244.3.0/24] 
	I1212 21:08:52.075307       1 main.go:297] Handling node with IPs: map[192.168.49.6:{}]
	I1212 21:08:52.075374       1 main.go:324] Node ha-008703-m05 has CIDR [10.244.4.0/24] 
	I1212 21:08:52.075523       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1212 21:08:52.075562       1 main.go:301] handling current node
	I1212 21:08:52.075616       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I1212 21:08:52.075645       1 main.go:324] Node ha-008703-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [6e71e63256727335b637c10c11453815d5622c8d5eb3fb9654535f5b4b692c2f] <==
	I1212 21:05:47.565735       1 server.go:150] Version: v1.34.2
	I1212 21:05:47.569343       1 server.go:152] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1212 21:05:49.281036       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=resource.k8s.io/v1alpha3
	W1212 21:05:49.281145       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=coordination.k8s.io/v1alpha2
	W1212 21:05:49.281179       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=rbac.authorization.k8s.io/v1alpha1
	W1212 21:05:49.281210       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storagemigration.k8s.io/v1alpha1
	W1212 21:05:49.281240       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=authentication.k8s.io/v1alpha1
	W1212 21:05:49.281267       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=imagepolicy.k8s.io/v1alpha1
	W1212 21:05:49.281295       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=scheduling.k8s.io/v1alpha1
	W1212 21:05:49.281322       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=storage.k8s.io/v1alpha1
	W1212 21:05:49.281350       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=admissionregistration.k8s.io/v1alpha1
	W1212 21:05:49.281379       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=internal.apiserver.k8s.io/v1alpha1
	W1212 21:05:49.281408       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=certificates.k8s.io/v1alpha1
	W1212 21:05:49.281437       1 api_enablement.go:112] alpha api enabled with emulated version 1.34 instead of the binary's version 1.34.2, this is unsupported, proceed at your own risk: api=node.k8s.io/v1alpha1
	W1212 21:05:49.315159       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:05:49.315278       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	I1212 21:05:49.320436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node_authorizer"
	I1212 21:05:49.332820       1 shared_informer.go:349] "Waiting for caches to sync" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 21:05:49.333128       1 plugins.go:157] Loaded 14 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,PodTopologyLabels,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I1212 21:05:49.333192       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1212 21:05:49.333470       1 instance.go:239] Using reconciler: lease
	W1212 21:05:49.335311       1 logging.go:55] [core] [Channel #7 SubChannel #8]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1212 21:06:09.313704       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1212 21:06:09.313704       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F1212 21:06:09.334486       1 instance.go:232] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [93fc3054083af7a4f11519559898692bcb87a0a869c0e823fd96f50def2f02cd] <==
	I1212 21:06:20.368230       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1212 21:06:20.400872       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 21:06:20.412450       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 21:06:20.421494       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 21:06:20.413161       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 21:06:20.433292       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 21:06:20.435830       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 21:06:20.439607       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 21:06:20.439971       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1212 21:06:20.446200       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 21:06:20.446507       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1212 21:06:20.451816       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 21:06:20.466902       1 cache.go:39] Caches are synced for autoregister controller
	W1212 21:06:20.494872       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.3 192.168.49.4]
	I1212 21:06:20.498501       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 21:06:20.540491       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 21:06:20.544831       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1212 21:06:20.560023       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I1212 21:06:20.915382       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 21:06:21.151536       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1212 21:06:24.277503       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I1212 21:06:26.132404       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 21:06:26.286031       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1212 21:06:26.435234       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	W1212 21:06:34.277202       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.3]
	
	
	==> kube-controller-manager [03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d] <==
	I1212 21:05:49.621747       1 serving.go:386] Generated self-signed cert in-memory
	I1212 21:05:50.751392       1 controllermanager.go:191] "Starting" version="v1.34.2"
	I1212 21:05:50.752418       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:05:50.756190       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1212 21:05:50.756306       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 21:05:50.756352       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1212 21:05:50.756362       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E1212 21:06:20.286877       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: forbidden: User \"system:kube-controller-manager\" cannot get path \"/healthz\""
	
	
	==> kube-controller-manager [f08cf114510a22705e6eddaabf72535ab357ca9404fe3342c1903bc51578da78] <==
	I1212 21:06:25.956884       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 21:06:25.956955       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1212 21:06:25.958970       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 21:06:25.962893       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 21:06:25.966650       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1212 21:06:25.966831       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1212 21:06:25.966929       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 21:06:25.970777       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1212 21:06:25.977116       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1212 21:06:25.978294       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 21:06:25.978569       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1212 21:06:25.979499       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 21:06:25.983384       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1212 21:06:25.991347       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1212 21:06:25.992778       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 21:06:26.003403       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 21:06:26.005063       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 21:07:03.404820       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-88mnq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-88mnq\": the object has been modified; please apply your changes to the latest version and try again"
	I1212 21:07:03.412728       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"0e70dacf-1fbe-4ce7-930f-4790639720ae", APIVersion:"v1", ResourceVersion:"293", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-88mnq EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-88mnq": the object has been modified; please apply your changes to the latest version and try again
	E1212 21:07:59.838789       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-7vpdp failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-7vpdp\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1212 21:08:00.368924       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-008703-m04"
	I1212 21:08:00.369535       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-008703-m05\" does not exist"
	I1212 21:08:00.462544       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-008703-m05" podCIDRs=["10.244.4.0/24"]
	I1212 21:08:00.966105       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-008703-m05"
	I1212 21:08:46.095858       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-008703-m04"
	
	
	==> kube-proxy [2b11faa987b07a654a1ecb1110634491c33e925915fa00680eccd4a7874806fc] <==
	I1212 21:06:23.734028       1 server_linux.go:53] "Using iptables proxy"
	I1212 21:06:24.050201       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 21:06:24.251547       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 21:06:24.251592       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1212 21:06:24.251667       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 21:06:24.378453       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 21:06:24.378516       1 server_linux.go:132] "Using iptables Proxier"
	I1212 21:06:24.392940       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 21:06:24.393314       1 server.go:527] "Version info" version="v1.34.2"
	I1212 21:06:24.393544       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:06:24.394794       1 config.go:200] "Starting service config controller"
	I1212 21:06:24.394851       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 21:06:24.394892       1 config.go:106] "Starting endpoint slice config controller"
	I1212 21:06:24.394921       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 21:06:24.394957       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 21:06:24.394983       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 21:06:24.395714       1 config.go:309] "Starting node config controller"
	I1212 21:06:24.398250       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 21:06:24.398321       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 21:06:24.497136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 21:06:24.497308       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 21:06:24.497322       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [05ba874359221bdf846b1fb8dbe911f962d4cf06c723c81f7a60410d0ca7fa2b] <==
	E1212 21:06:20.369105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1212 21:06:20.369154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:06:20.369207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 21:06:20.369802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:06:20.369869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:06:20.369925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:06:20.369973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 21:06:20.370030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 21:06:20.370079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 21:06:20.370124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 21:06:20.371252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 21:06:20.371299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 21:06:20.371338       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 21:06:20.438949       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 21:06:20.444983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 21:06:20.445109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I1212 21:06:20.470730       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1212 21:08:00.700964       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-2dqw9\": pod kindnet-2dqw9 is already assigned to node \"ha-008703-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-2dqw9" node="ha-008703-m05"
	E1212 21:08:00.711320       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 168f02be-0130-4f0b-8920-a4de479cff03(kube-system/kindnet-2dqw9) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-2dqw9"
	E1212 21:08:00.711432       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-2dqw9\": pod kindnet-2dqw9 is already assigned to node \"ha-008703-m05\"" logger="UnhandledError" pod="kube-system/kindnet-2dqw9"
	E1212 21:08:00.701131       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-l5ppw\": pod kube-proxy-l5ppw is already assigned to node \"ha-008703-m05\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-l5ppw" node="ha-008703-m05"
	E1212 21:08:00.711520       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod e8b7e3de-7dbc-4512-abcb-5ec2ceffbac4(kube-system/kube-proxy-l5ppw) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-l5ppw"
	E1212 21:08:00.718215       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-l5ppw\": pod kube-proxy-l5ppw is already assigned to node \"ha-008703-m05\"" logger="UnhandledError" pod="kube-system/kube-proxy-l5ppw"
	I1212 21:08:00.718284       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-l5ppw" node="ha-008703-m05"
	I1212 21:08:00.718655       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-2dqw9" node="ha-008703-m05"
	
	
	==> kubelet <==
	Dec 12 21:06:20 ha-008703 kubelet[764]: E1212 21:06:20.676261     764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ha-008703\" already exists" pod="kube-system/kube-controller-manager-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.676518     764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.684227     764 apiserver.go:52] "Watching apiserver"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.715180     764 kubelet.go:3203] "Trying to delete pod" pod="kube-system/kube-vip-ha-008703" podUID="13ad7cce-3343-4a6d-b066-b55715ef2727"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.733772     764 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c574b029f9f86252bb40df91aa285cf" path="/var/lib/kubelet/pods/4c574b029f9f86252bb40df91aa285cf/volumes"
	Dec 12 21:06:20 ha-008703 kubelet[764]: E1212 21:06:20.737750     764 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ha-008703\" already exists" pod="kube-system/kube-scheduler-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.772520     764 kubelet.go:3209] "Deleted mirror pod as it didn't match the static Pod" pod="kube-system/kube-vip-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.772704     764 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-vip-ha-008703"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.789443     764 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.857272     764 scope.go:117] "RemoveContainer" containerID="03159ef735d037e6e2bd96d596901e88dca8d0148f6ec78c4a5b8a6ed803cd1d"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.891614     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee2850f7-5474-48e9-b8dc-f9e14292127e-xtables-lock\") pod \"kube-proxy-tgx5j\" (UID: \"ee2850f7-5474-48e9-b8dc-f9e14292127e\") " pod="kube-system/kube-proxy-tgx5j"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.891885     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee2850f7-5474-48e9-b8dc-f9e14292127e-lib-modules\") pod \"kube-proxy-tgx5j\" (UID: \"ee2850f7-5474-48e9-b8dc-f9e14292127e\") " pod="kube-system/kube-proxy-tgx5j"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892133     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-xtables-lock\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892297     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d57f23f-4461-4d86-b91f-e2628d8874ab-tmp\") pod \"storage-provisioner\" (UID: \"2d57f23f-4461-4d86-b91f-e2628d8874ab\") " pod="kube-system/storage-provisioner"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.892406     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-cni-cfg\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.898926     764 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec-lib-modules\") pod \"kindnet-f7h24\" (UID: \"d9d75e5e-f77e-4a7c-8e0f-d9807515a3ec\") " pod="kube-system/kindnet-f7h24"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.897461     764 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-vip-ha-008703" podStartSLOduration=0.897445384 podStartE2EDuration="897.445384ms" podCreationTimestamp="2025-12-12 21:06:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-12 21:06:20.850652974 +0000 UTC m=+34.291145116" watchObservedRunningTime="2025-12-12 21:06:20.897445384 +0000 UTC m=+34.337937510"
	Dec 12 21:06:20 ha-008703 kubelet[764]: I1212 21:06:20.972495     764 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.192647     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e WatchSource:0}: Error finding container b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e: Status 404 returned error can't find the container with id b75479f05351cdf798fa80b4e1c252898fa67808e7d81a1af33b3519aae06b7e
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.402414     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226 WatchSource:0}: Error finding container 1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226: Status 404 returned error can't find the container with id 1b6b1faf503c87c4c44d12134b2dac404566a4ebc1082f12e63180a299c79226
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.434279     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced WatchSource:0}: Error finding container 021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced: Status 404 returned error can't find the container with id 021217a0cf93140b9a5c382c2f846015b7e95ddb0abd41dde0834754a427bced
	Dec 12 21:06:21 ha-008703 kubelet[764]: W1212 21:06:21.570067     764 manager.go:1169] Failed to process watch event {EventType:0 Name:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio-2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967 WatchSource:0}: Error finding container 2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967: Status 404 returned error can't find the container with id 2f24e16e55927a827b07d1da2418da7e91e09a57650064d988371c48193e9967
	Dec 12 21:06:46 ha-008703 kubelet[764]: E1212 21:06:46.699197     764 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50\": container with ID starting with f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50 not found: ID does not exist" containerID="f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50"
	Dec 12 21:06:46 ha-008703 kubelet[764]: I1212 21:06:46.699251     764 kuberuntime_gc.go:364] "Error getting ContainerStatus for containerID" containerID="f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50" err="rpc error: code = NotFound desc = could not find container \"f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50\": container with ID starting with f48af9c1b63d316a272c7c77c9d10c0884bff67233924dfabc610a6200c4af50 not found: ID does not exist"
	Dec 12 21:06:53 ha-008703 kubelet[764]: I1212 21:06:53.074350     764 scope.go:117] "RemoveContainer" containerID="82dd101ece4d11a82b5e84808cb05db3a78e943db22ae1196fbeeda7f49c4b53"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703
helpers_test.go:270: (dbg) Run:  kubectl --context ha-008703 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (4.74s)
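For manual triage, the same post-mortem checks can be rerun against the profile outside the test harness; a minimal sketch using the commands the helper issued above (profile name ha-008703 and binary path taken from the log, shown only for illustration):

	# re-run the helper's health checks by hand
	out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-008703 -n ha-008703
	kubectl --context ha-008703 get po -A --field-selector=status.phase!=Running -o=jsonpath={.items[*].metadata.name}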

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.16s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-759631 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p json-output-759631 --output=json --user=testUser: exit status 80 (2.161079051s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"cae79720-3ce1-48db-8616-08403227e291","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-759631 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"409d8344-5e4b-4779-af48-bd5907e2996d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-12T21:10:04Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"787242e1-7d53-4efc-8a0d-186bfc75727a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 pause -p json-output-759631 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.16s)
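The lines under -- stdout -- are newline-delimited CloudEvents-style JSON objects, so the failure message can be pulled out of the stream mechanically. A minimal sketch, assuming jq is available on the host (profile name and flags copied from the test invocation above):

	# print only the error events emitted by minikube's JSON output mode
	out/minikube-linux-arm64 pause -p json-output-759631 --output=json --user=testUser \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'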

                                                
                                    
x
+
TestJSONOutput/unpause/Command (2.24s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-759631 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-arm64 unpause -p json-output-759631 --output=json --user=testUser: exit status 80 (2.239734145s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"760cf73a-542b-4d05-a7bd-7d362b2a7fd0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-759631 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e2b8f8af-fbc0-4dda-afba-ee5bb03c6bb3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-12T21:10:07Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"3d5d298c-2cdf-4db8-9018-2bcc6f4a5805","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-arm64 unpause -p json-output-759631 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (2.24s)
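Both the pause and unpause failures surface the same underlying runc error: open /run/runc: no such file or directory. One way to confirm the node's state by hand is over SSH; a minimal sketch (the exact checks are illustrative, not part of the test):

	# inspect the runc state directory, then repeat the command the test saw fail
	out/minikube-linux-arm64 ssh -p json-output-759631 -- sudo ls -ld /run/runc
	out/minikube-linux-arm64 ssh -p json-output-759631 -- sudo runc list -f json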

                                                
                                    
x
+
TestKubernetesUpgrade (793.51s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 21:27:35.804795  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:27:44.062604  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.067566037s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-905307
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-905307: (1.503626835s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-905307 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-905307 status --format={{.Host}}: exit status 7 (96.426776ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
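At this point the test has started the cluster on v1.28.0, stopped it, and confirmed the host reports Stopped; a condensed sketch of that sequence for manual reproduction, with flags copied from the invocations above (the failing upgrade attempt itself follows below):

	out/minikube-linux-arm64 start -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
	out/minikube-linux-arm64 stop -p kubernetes-upgrade-905307
	out/minikube-linux-arm64 -p kubernetes-upgrade-905307 status --format={{.Host}}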
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: exit status 109 (12m22.479896875s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-905307] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-905307" primary control-plane node in "kubernetes-upgrade-905307" cluster
	* Pulling base image v0.0.48-1765505794-22112 ...
	* Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:27:56.382724  543793 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:27:56.382925  543793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:27:56.382988  543793 out.go:374] Setting ErrFile to fd 2...
	I1212 21:27:56.383009  543793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:27:56.383332  543793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:27:56.384524  543793 out.go:368] Setting JSON to false
	I1212 21:27:56.385364  543793 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15029,"bootTime":1765559848,"procs":162,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 21:27:56.385471  543793 start.go:143] virtualization:  
	I1212 21:27:56.389061  543793 out.go:179] * [kubernetes-upgrade-905307] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 21:27:56.392275  543793 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:27:56.392443  543793 notify.go:221] Checking for updates...
	I1212 21:27:56.396611  543793 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:27:56.399739  543793 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:27:56.404545  543793 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 21:27:56.408349  543793 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 21:27:56.411417  543793 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:27:56.414802  543793 config.go:182] Loaded profile config "kubernetes-upgrade-905307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1212 21:27:56.415815  543793 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:27:56.476110  543793 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 21:27:56.476237  543793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:27:56.560601  543793 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:63 SystemTime:2025-12-12 21:27:56.550181006 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:27:56.560714  543793 docker.go:319] overlay module found
	I1212 21:27:56.564288  543793 out.go:179] * Using the docker driver based on existing profile
	I1212 21:27:56.567162  543793 start.go:309] selected driver: docker
	I1212 21:27:56.567183  543793 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-905307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-905307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:27:56.567281  543793 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:27:56.567929  543793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:27:56.681909  543793 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:35 OomKillDisable:true NGoroutines:59 SystemTime:2025-12-12 21:27:56.669560555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:27:56.682243  543793 cni.go:84] Creating CNI manager for ""
	I1212 21:27:56.682284  543793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 21:27:56.682316  543793 start.go:353] cluster config:
	{Name:kubernetes-upgrade-905307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-905307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:27:56.685736  543793 out.go:179] * Starting "kubernetes-upgrade-905307" primary control-plane node in "kubernetes-upgrade-905307" cluster
	I1212 21:27:56.688663  543793 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:27:56.691786  543793 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:27:56.694775  543793 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 21:27:56.694822  543793 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 21:27:56.694832  543793 cache.go:65] Caching tarball of preloaded images
	I1212 21:27:56.694922  543793 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:27:56.694932  543793 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on crio
	I1212 21:27:56.695041  543793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/config.json ...
	I1212 21:27:56.695247  543793 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:27:56.729557  543793 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:27:56.729576  543793 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:27:56.729589  543793 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:27:56.729620  543793 start.go:360] acquireMachinesLock for kubernetes-upgrade-905307: {Name:mkbf8e4a6b50bc0b0a584992b8590666ebb2d0eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:27:56.729675  543793 start.go:364] duration metric: took 35.38µs to acquireMachinesLock for "kubernetes-upgrade-905307"
	I1212 21:27:56.729695  543793 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:27:56.729701  543793 fix.go:54] fixHost starting: 
	I1212 21:27:56.729961  543793 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-905307 --format={{.State.Status}}
	I1212 21:27:56.746223  543793 fix.go:112] recreateIfNeeded on kubernetes-upgrade-905307: state=Stopped err=<nil>
	W1212 21:27:56.746251  543793 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:27:56.749509  543793 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-905307" ...
	I1212 21:27:56.749616  543793 cli_runner.go:164] Run: docker start kubernetes-upgrade-905307
	I1212 21:27:57.061609  543793 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-905307 --format={{.State.Status}}
	I1212 21:27:57.102069  543793 kic.go:430] container "kubernetes-upgrade-905307" state is running.
	I1212 21:27:57.102449  543793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-905307
	I1212 21:27:57.140443  543793 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/config.json ...
	I1212 21:27:57.140660  543793 machine.go:94] provisionDockerMachine start ...
	I1212 21:27:57.140726  543793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-905307
	I1212 21:27:57.170081  543793 main.go:143] libmachine: Using SSH client type: native
	I1212 21:27:57.170405  543793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1212 21:27:57.170414  543793 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:27:57.174692  543793 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:28:00.453704  543793 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-905307
	
	I1212 21:28:00.453737  543793 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-905307"
	I1212 21:28:00.453816  543793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-905307
	I1212 21:28:00.476953  543793 main.go:143] libmachine: Using SSH client type: native
	I1212 21:28:00.477328  543793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1212 21:28:00.477342  543793 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-905307 && echo "kubernetes-upgrade-905307" | sudo tee /etc/hostname
	I1212 21:28:00.665011  543793 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-905307
	
	I1212 21:28:00.665130  543793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-905307
	I1212 21:28:00.686244  543793 main.go:143] libmachine: Using SSH client type: native
	I1212 21:28:00.686585  543793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1212 21:28:00.686607  543793 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-905307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-905307/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-905307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:28:00.840606  543793 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:28:00.840680  543793 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:28:00.840718  543793 ubuntu.go:190] setting up certificates
	I1212 21:28:00.840737  543793 provision.go:84] configureAuth start
	I1212 21:28:00.840805  543793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-905307
	I1212 21:28:00.859991  543793 provision.go:143] copyHostCerts
	I1212 21:28:00.860076  543793 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:28:00.860091  543793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:28:00.860167  543793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:28:00.860265  543793 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:28:00.860276  543793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:28:00.860303  543793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:28:00.860360  543793 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:28:00.860389  543793 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:28:00.860419  543793 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:28:00.860472  543793 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-905307 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-905307 localhost minikube]
	I1212 21:28:00.944555  543793 provision.go:177] copyRemoteCerts
	I1212 21:28:00.944671  543793 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:28:00.944746  543793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-905307
	I1212 21:28:00.961510  543793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/kubernetes-upgrade-905307/id_rsa Username:docker}
	I1212 21:28:01.068831  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 21:28:01.085928  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:28:01.103987  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:28:01.122757  543793 provision.go:87] duration metric: took 281.9947ms to configureAuth
	I1212 21:28:01.122789  543793 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:28:01.122989  543793 config.go:182] Loaded profile config "kubernetes-upgrade-905307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 21:28:01.123096  543793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-905307
	I1212 21:28:01.140894  543793 main.go:143] libmachine: Using SSH client type: native
	I1212 21:28:01.141226  543793 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33385 <nil> <nil>}
	I1212 21:28:01.141253  543793 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:28:01.616408  543793 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:28:01.616431  543793 machine.go:97] duration metric: took 4.475760722s to provisionDockerMachine
	I1212 21:28:01.616444  543793 start.go:293] postStartSetup for "kubernetes-upgrade-905307" (driver="docker")
	I1212 21:28:01.616457  543793 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:28:01.616524  543793 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:28:01.616563  543793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-905307
	I1212 21:28:01.637613  543793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/kubernetes-upgrade-905307/id_rsa Username:docker}
	I1212 21:28:01.762868  543793 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:28:01.767989  543793 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:28:01.768015  543793 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:28:01.768026  543793 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:28:01.768080  543793 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:28:01.768158  543793 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:28:01.768266  543793 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:28:01.782314  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:28:01.826345  543793 start.go:296] duration metric: took 209.884169ms for postStartSetup
	I1212 21:28:01.826448  543793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:28:01.826494  543793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-905307
	I1212 21:28:01.858932  543793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/kubernetes-upgrade-905307/id_rsa Username:docker}
	I1212 21:28:02.009387  543793 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:28:02.020514  543793 fix.go:56] duration metric: took 5.290804793s for fixHost
	I1212 21:28:02.020545  543793 start.go:83] releasing machines lock for "kubernetes-upgrade-905307", held for 5.290861359s
	I1212 21:28:02.020635  543793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-905307
	I1212 21:28:02.052499  543793 ssh_runner.go:195] Run: cat /version.json
	I1212 21:28:02.052588  543793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-905307
	I1212 21:28:02.052769  543793 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:28:02.052816  543793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-905307
	I1212 21:28:02.106929  543793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/kubernetes-upgrade-905307/id_rsa Username:docker}
	I1212 21:28:02.110799  543793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33385 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/kubernetes-upgrade-905307/id_rsa Username:docker}
	I1212 21:28:02.245333  543793 ssh_runner.go:195] Run: systemctl --version
	I1212 21:28:02.383997  543793 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:28:02.447641  543793 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:28:02.453086  543793 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:28:02.453164  543793 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:28:02.465005  543793 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:28:02.465029  543793 start.go:496] detecting cgroup driver to use...
	I1212 21:28:02.465062  543793 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:28:02.465108  543793 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:28:02.487617  543793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:28:02.504819  543793 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:28:02.504895  543793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:28:02.527574  543793 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:28:02.544974  543793 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:28:02.720154  543793 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:28:02.865307  543793 docker.go:234] disabling docker service ...
	I1212 21:28:02.865466  543793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:28:02.882366  543793 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:28:02.901070  543793 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:28:03.034096  543793 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:28:03.165710  543793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:28:03.179208  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:28:03.194060  543793 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:28:03.194176  543793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:28:03.203122  543793 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:28:03.203231  543793 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:28:03.213054  543793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:28:03.221982  543793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:28:03.230678  543793 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:28:03.239029  543793 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:28:03.248196  543793 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:28:03.256955  543793 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:28:03.266327  543793 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:28:03.274595  543793 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:28:03.282375  543793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:28:03.399217  543793 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:28:03.571414  543793 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:28:03.571532  543793 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:28:03.575225  543793 start.go:564] Will wait 60s for crictl version
	I1212 21:28:03.575301  543793 ssh_runner.go:195] Run: which crictl
	I1212 21:28:03.578736  543793 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:28:03.602865  543793 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:28:03.602964  543793 ssh_runner.go:195] Run: crio --version
	I1212 21:28:03.634097  543793 ssh_runner.go:195] Run: crio --version
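	The block above (21:28:03) rewrites /etc/crio/crio.conf.d/02-crio.conf in place, restarts CRI-O, and then confirms the socket and crictl/crio versions. A condensed sketch of the same reconfiguration, for anyone reproducing this step by hand over SSH; the paths and keys come from the log, while the final grep is only an assumed sanity check, not something minikube runs:
	    $ sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	    $ sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    $ sudo systemctl daemon-reload && sudo systemctl restart crio
	    $ grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf   # assumed check: confirm the overrides took effect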
	I1212 21:28:03.670342  543793 out.go:179] * Preparing Kubernetes v1.35.0-beta.0 on CRI-O 1.34.3 ...
	I1212 21:28:03.673130  543793 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-905307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:28:03.689243  543793 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1212 21:28:03.693175  543793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:28:03.703153  543793 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-905307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-905307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:28:03.703277  543793 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 21:28:03.703333  543793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:28:03.736881  543793 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-beta.0". assuming images are not preloaded.
	I1212 21:28:03.736965  543793 ssh_runner.go:195] Run: which lz4
	I1212 21:28:03.740768  543793 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1212 21:28:03.744507  543793 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1212 21:28:03.744545  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 --> /preloaded.tar.lz4 (306100841 bytes)
	I1212 21:28:05.519160  543793 crio.go:462] duration metric: took 1.778438572s to copy over tarball
	I1212 21:28:05.519271  543793 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1212 21:28:08.235873  543793 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.716574141s)
	I1212 21:28:08.235903  543793 crio.go:469] duration metric: took 2.716712818s to extract the tarball
	I1212 21:28:08.235911  543793 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1212 21:28:08.276583  543793 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:28:08.329274  543793 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:28:08.329298  543793 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:28:08.329306  543793 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 crio true true} ...
	I1212 21:28:08.329405  543793 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-905307 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-905307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:28:08.329491  543793 ssh_runner.go:195] Run: crio config
	I1212 21:28:08.432206  543793 cni.go:84] Creating CNI manager for ""
	I1212 21:28:08.432232  543793 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 21:28:08.432248  543793 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:28:08.432279  543793 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-905307 NodeName:kubernetes-upgrade-905307 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:28:08.432445  543793 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-905307"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:28:08.432565  543793 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1212 21:28:08.447126  543793 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:28:08.447291  543793 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:28:08.465040  543793 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I1212 21:28:08.484182  543793 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1212 21:28:08.500184  543793 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
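	At this point the kubelet drop-in (10-kubeadm.conf), the kubelet unit file, and the freshly rendered kubeadm manifest (/var/tmp/minikube/kubeadm.yaml.new) have all been written to the node. A minimal sketch for inspecting them over the same SSH session, assuming nothing beyond the paths shown in the log; the diff mirrors the drift check minikube itself performs a little further down:
	    $ systemctl cat kubelet                                                          # unit file plus the 10-kubeadm.conf drop-in
	    $ sudo cat /var/tmp/minikube/kubeadm.yaml.new                                    # the rendered kubeadm config
	    $ sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new # spot the config drift by hand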
	I1212 21:28:08.517113  543793 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:28:08.521441  543793 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1212 21:28:08.532660  543793 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:28:08.657105  543793 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:28:08.678131  543793 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307 for IP: 192.168.76.2
	I1212 21:28:08.678153  543793 certs.go:195] generating shared ca certs ...
	I1212 21:28:08.678170  543793 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:28:08.678342  543793 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:28:08.678396  543793 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:28:08.678409  543793 certs.go:257] generating profile certs ...
	I1212 21:28:08.678533  543793 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/client.key
	I1212 21:28:08.678596  543793 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/apiserver.key.c267819a
	I1212 21:28:08.678656  543793 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/proxy-client.key
	I1212 21:28:08.678787  543793 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:28:08.678835  543793 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:28:08.678847  543793 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:28:08.678887  543793 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:28:08.678916  543793 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:28:08.678957  543793 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:28:08.679007  543793 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:28:08.679743  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:28:08.714315  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:28:08.753755  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:28:08.779791  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:28:08.810887  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1212 21:28:08.837653  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1212 21:28:08.858642  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:28:08.880477  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:28:08.905292  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:28:08.923417  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:28:08.943010  543793 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:28:08.961899  543793 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:28:08.977589  543793 ssh_runner.go:195] Run: openssl version
	I1212 21:28:08.986720  543793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:28:08.995645  543793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:28:09.010242  543793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:28:09.017799  543793 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:28:09.017888  543793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:28:09.064076  543793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:28:09.072612  543793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:28:09.080958  543793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:28:09.089734  543793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:28:09.094292  543793 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:28:09.094370  543793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:28:09.138528  543793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:28:09.146994  543793 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:28:09.156133  543793 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:28:09.165435  543793 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:28:09.170189  543793 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:28:09.170280  543793 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:28:09.220830  543793 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:28:09.232464  543793 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:28:09.236982  543793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:28:09.281062  543793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:28:09.332523  543793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:28:09.376077  543793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:28:09.419684  543793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:28:09.471243  543793 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
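	The six openssl runs above are certificate expiry checks: -checkend 86400 exits 0 when the certificate is still valid 86,400 seconds (24 hours) from now and non-zero otherwise, presumably letting minikube decide whether the control-plane certs need regenerating. A standalone sketch of the same check on one cert; the echo messages are illustrative only:
	    $ sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	        && echo "cert valid for at least 24h" \
	        || echo "cert expires within 24h (or could not be read)"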
	I1212 21:28:09.525355  543793 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-905307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-905307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:28:09.525456  543793 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:28:09.525517  543793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:28:09.585988  543793 cri.go:89] found id: ""
	I1212 21:28:09.586101  543793 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:28:09.597824  543793 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:28:09.597888  543793 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:28:09.598017  543793 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:28:09.616693  543793 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:28:09.617181  543793 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-905307" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:28:09.617348  543793 kubeconfig.go:62] /home/jenkins/minikube-integration/22112-362983/kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-905307" cluster setting kubeconfig missing "kubernetes-upgrade-905307" context setting]
	I1212 21:28:09.617713  543793 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:28:09.618273  543793 kapi.go:59] client config for kubernetes-upgrade-905307: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/kubernetes-upgrade-905307/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:28:09.618860  543793 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 21:28:09.619032  543793 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 21:28:09.619056  543793 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 21:28:09.619079  543793 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 21:28:09.619114  543793 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 21:28:09.620563  543793 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:28:09.631932  543793 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-12 21:27:29.440371570 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-12 21:28:08.508781618 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/crio/crio.sock
	   name: "kubernetes-upgrade-905307"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
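	The drift shown in this diff is the kubeadm config API bump from v1beta3 to v1beta4 (extraArgs become name/value lists, the etcd proxy-refresh-interval override disappears, and kubernetesVersion moves from v1.28.0 to v1.35.0-beta.0), which minikube resolves by regenerating the file. For reference, kubeadm can perform the same schema translation on an old config by hand; this is only a sketch, not something the test runs, and the output path is hypothetical:
	    $ sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config migrate \
	        --old-config /var/tmp/minikube/kubeadm.yaml \
	        --new-config /tmp/kubeadm-migrated.yaml      # hypothetical output location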
	I1212 21:28:09.631997  543793 kubeadm.go:1161] stopping kube-system containers ...
	I1212 21:28:09.632023  543793 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1212 21:28:09.632120  543793 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:28:09.669930  543793 cri.go:89] found id: ""
	I1212 21:28:09.670042  543793 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1212 21:28:09.687560  543793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:28:09.698109  543793 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5643 Dec 12 21:27 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 12 21:27 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 12 21:27 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 12 21:27 /etc/kubernetes/scheduler.conf
	
	I1212 21:28:09.698191  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:28:09.707669  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:28:09.717091  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:28:09.726811  543793 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:28:09.726890  543793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:28:09.737952  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:28:09.747024  543793 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:28:09.747105  543793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:28:09.756915  543793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:28:09.768282  543793 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:28:09.826466  543793 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:28:10.646897  543793 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:28:10.923433  543793 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:28:11.043233  543793 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1212 21:28:11.111356  543793 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:28:11.111435  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:11.612496  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:12.112362  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:12.611614  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:13.111490  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:13.612316  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:14.111590  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:14.612694  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:15.112392  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:15.612276  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:16.112292  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:16.612345  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:17.111516  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:17.612463  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:18.112328  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:18.611807  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:19.111625  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:19.612479  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:20.112307  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:20.612182  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:21.111754  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:21.611665  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:22.112532  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:22.612525  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:23.111999  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:23.612379  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:24.112557  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:24.612520  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:25.112345  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:25.611526  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:26.112196  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:26.612363  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:27.111975  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:27.611570  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:28.111939  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:28.615060  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:29.112009  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:29.612505  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:30.111589  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:30.611576  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:31.111815  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:31.612025  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:32.112307  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:32.611570  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:33.112276  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:33.611544  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:34.112504  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:34.612254  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:35.111632  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:35.612509  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:36.112203  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:36.612237  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:37.111538  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:37.612453  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:38.112418  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:38.611541  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:39.111571  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:39.611617  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:40.112341  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:40.612464  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:41.112608  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:41.612003  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:42.111701  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:42.611629  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:43.112527  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:43.611590  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:44.111596  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:44.611608  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:45.111582  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:45.611575  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:46.111503  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:46.612302  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:47.112445  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:47.612639  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:48.111502  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:48.611509  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:49.112203  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:49.612273  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:50.112305  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:50.612499  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:51.111970  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:51.612523  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:52.112304  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:52.612310  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:53.112257  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:53.612236  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:54.112552  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:54.612413  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:55.111635  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:55.611685  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:56.111626  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:56.612345  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:57.112288  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:57.611627  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:58.111642  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:58.611765  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:59.111654  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:28:59.611631  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:00.111752  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:00.611532  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:01.111609  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:01.612431  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:02.112321  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:02.612573  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:03.111603  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:03.611623  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:04.111582  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:04.611688  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:05.111638  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:05.612430  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:06.112226  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:06.611605  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:07.112192  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:07.612398  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:08.112256  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:08.611676  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:09.111581  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:09.612565  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:10.112483  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:10.611871  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:11.111685  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:11.111812  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:11.139943  543793 cri.go:89] found id: ""
	I1212 21:29:11.139967  543793 logs.go:282] 0 containers: []
	W1212 21:29:11.139976  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:11.139983  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:11.140053  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:11.173208  543793 cri.go:89] found id: ""
	I1212 21:29:11.173253  543793 logs.go:282] 0 containers: []
	W1212 21:29:11.173264  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:11.173270  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:11.173333  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:11.200489  543793 cri.go:89] found id: ""
	I1212 21:29:11.200522  543793 logs.go:282] 0 containers: []
	W1212 21:29:11.200532  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:11.200538  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:11.200597  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:11.226771  543793 cri.go:89] found id: ""
	I1212 21:29:11.226798  543793 logs.go:282] 0 containers: []
	W1212 21:29:11.226806  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:11.226812  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:11.226878  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:11.255470  543793 cri.go:89] found id: ""
	I1212 21:29:11.255491  543793 logs.go:282] 0 containers: []
	W1212 21:29:11.255500  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:11.255506  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:11.255563  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:11.285150  543793 cri.go:89] found id: ""
	I1212 21:29:11.285179  543793 logs.go:282] 0 containers: []
	W1212 21:29:11.285188  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:11.285195  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:11.285249  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:11.315527  543793 cri.go:89] found id: ""
	I1212 21:29:11.315556  543793 logs.go:282] 0 containers: []
	W1212 21:29:11.315565  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:11.315570  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:11.315634  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:11.342957  543793 cri.go:89] found id: ""
	I1212 21:29:11.342983  543793 logs.go:282] 0 containers: []
	W1212 21:29:11.342991  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:11.343001  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:11.343026  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:11.422136  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:11.422210  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:11.439577  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:11.439657  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:11.752729  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:11.752752  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:11.752778  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:11.790327  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:11.790367  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:14.320532  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:14.330118  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:14.330186  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:14.355757  543793 cri.go:89] found id: ""
	I1212 21:29:14.355782  543793 logs.go:282] 0 containers: []
	W1212 21:29:14.355791  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:14.355797  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:14.355853  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:14.387618  543793 cri.go:89] found id: ""
	I1212 21:29:14.387652  543793 logs.go:282] 0 containers: []
	W1212 21:29:14.387660  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:14.387666  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:14.387718  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:14.416487  543793 cri.go:89] found id: ""
	I1212 21:29:14.416508  543793 logs.go:282] 0 containers: []
	W1212 21:29:14.416516  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:14.416522  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:14.416574  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:14.441229  543793 cri.go:89] found id: ""
	I1212 21:29:14.441251  543793 logs.go:282] 0 containers: []
	W1212 21:29:14.441258  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:14.441264  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:14.441318  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:14.465730  543793 cri.go:89] found id: ""
	I1212 21:29:14.465805  543793 logs.go:282] 0 containers: []
	W1212 21:29:14.465836  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:14.465844  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:14.465933  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:14.491737  543793 cri.go:89] found id: ""
	I1212 21:29:14.491770  543793 logs.go:282] 0 containers: []
	W1212 21:29:14.491779  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:14.491785  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:14.491842  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:14.519365  543793 cri.go:89] found id: ""
	I1212 21:29:14.519388  543793 logs.go:282] 0 containers: []
	W1212 21:29:14.519397  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:14.519402  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:14.519460  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:14.545093  543793 cri.go:89] found id: ""
	I1212 21:29:14.545119  543793 logs.go:282] 0 containers: []
	W1212 21:29:14.545128  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:14.545137  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:14.545171  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:14.575005  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:14.575043  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:14.606093  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:14.606121  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:14.675038  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:14.675077  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:14.691468  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:14.691496  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:14.779980  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:17.280212  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:17.290183  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:17.290254  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:17.314581  543793 cri.go:89] found id: ""
	I1212 21:29:17.314605  543793 logs.go:282] 0 containers: []
	W1212 21:29:17.314613  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:17.314619  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:17.314676  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:17.338889  543793 cri.go:89] found id: ""
	I1212 21:29:17.338913  543793 logs.go:282] 0 containers: []
	W1212 21:29:17.338921  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:17.338927  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:17.338983  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:17.364124  543793 cri.go:89] found id: ""
	I1212 21:29:17.364147  543793 logs.go:282] 0 containers: []
	W1212 21:29:17.364157  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:17.364163  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:17.364230  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:17.390796  543793 cri.go:89] found id: ""
	I1212 21:29:17.390822  543793 logs.go:282] 0 containers: []
	W1212 21:29:17.390831  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:17.390837  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:17.390892  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:17.416971  543793 cri.go:89] found id: ""
	I1212 21:29:17.416994  543793 logs.go:282] 0 containers: []
	W1212 21:29:17.417002  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:17.417008  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:17.417078  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:17.442591  543793 cri.go:89] found id: ""
	I1212 21:29:17.442620  543793 logs.go:282] 0 containers: []
	W1212 21:29:17.442629  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:17.442636  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:17.442693  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:17.467992  543793 cri.go:89] found id: ""
	I1212 21:29:17.468020  543793 logs.go:282] 0 containers: []
	W1212 21:29:17.468029  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:17.468034  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:17.468143  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:17.493838  543793 cri.go:89] found id: ""
	I1212 21:29:17.493862  543793 logs.go:282] 0 containers: []
	W1212 21:29:17.493875  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:17.493902  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:17.493919  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:17.523123  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:17.523192  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:17.589737  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:17.589776  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:17.606416  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:17.606449  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:17.677656  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:17.677681  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:17.677695  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:20.210267  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:20.220334  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:20.220470  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:20.247176  543793 cri.go:89] found id: ""
	I1212 21:29:20.247238  543793 logs.go:282] 0 containers: []
	W1212 21:29:20.247261  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:20.247280  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:20.247351  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:20.275529  543793 cri.go:89] found id: ""
	I1212 21:29:20.275566  543793 logs.go:282] 0 containers: []
	W1212 21:29:20.275575  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:20.275583  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:20.275674  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:20.301901  543793 cri.go:89] found id: ""
	I1212 21:29:20.301963  543793 logs.go:282] 0 containers: []
	W1212 21:29:20.301979  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:20.301985  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:20.302045  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:20.328308  543793 cri.go:89] found id: ""
	I1212 21:29:20.328331  543793 logs.go:282] 0 containers: []
	W1212 21:29:20.328340  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:20.328347  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:20.328426  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:20.354197  543793 cri.go:89] found id: ""
	I1212 21:29:20.354222  543793 logs.go:282] 0 containers: []
	W1212 21:29:20.354235  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:20.354242  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:20.354304  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:20.383047  543793 cri.go:89] found id: ""
	I1212 21:29:20.383071  543793 logs.go:282] 0 containers: []
	W1212 21:29:20.383080  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:20.383086  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:20.383144  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:20.412431  543793 cri.go:89] found id: ""
	I1212 21:29:20.412455  543793 logs.go:282] 0 containers: []
	W1212 21:29:20.412462  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:20.412468  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:20.412523  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:20.437724  543793 cri.go:89] found id: ""
	I1212 21:29:20.437751  543793 logs.go:282] 0 containers: []
	W1212 21:29:20.437760  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:20.437769  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:20.437781  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:20.503594  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:20.503631  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:20.519432  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:20.519461  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:20.582441  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:20.582462  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:20.582475  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:20.613730  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:20.613769  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:23.143400  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:23.153895  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:23.153968  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:23.180445  543793 cri.go:89] found id: ""
	I1212 21:29:23.180478  543793 logs.go:282] 0 containers: []
	W1212 21:29:23.180489  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:23.180498  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:23.180559  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:23.207137  543793 cri.go:89] found id: ""
	I1212 21:29:23.207163  543793 logs.go:282] 0 containers: []
	W1212 21:29:23.207172  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:23.207190  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:23.207250  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:23.236556  543793 cri.go:89] found id: ""
	I1212 21:29:23.236579  543793 logs.go:282] 0 containers: []
	W1212 21:29:23.236589  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:23.236595  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:23.236659  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:23.269564  543793 cri.go:89] found id: ""
	I1212 21:29:23.269588  543793 logs.go:282] 0 containers: []
	W1212 21:29:23.269596  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:23.269604  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:23.269663  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:23.295072  543793 cri.go:89] found id: ""
	I1212 21:29:23.295099  543793 logs.go:282] 0 containers: []
	W1212 21:29:23.295108  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:23.295114  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:23.295173  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:23.320323  543793 cri.go:89] found id: ""
	I1212 21:29:23.320358  543793 logs.go:282] 0 containers: []
	W1212 21:29:23.320397  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:23.320404  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:23.320473  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:23.346608  543793 cri.go:89] found id: ""
	I1212 21:29:23.346675  543793 logs.go:282] 0 containers: []
	W1212 21:29:23.346700  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:23.346720  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:23.346809  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:23.374728  543793 cri.go:89] found id: ""
	I1212 21:29:23.374807  543793 logs.go:282] 0 containers: []
	W1212 21:29:23.374830  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:23.374856  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:23.374883  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:23.406089  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:23.406128  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:23.434962  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:23.434993  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:23.502257  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:23.502294  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:23.518665  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:23.518693  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:23.586066  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:26.086345  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:26.096767  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:26.096838  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:26.122706  543793 cri.go:89] found id: ""
	I1212 21:29:26.122732  543793 logs.go:282] 0 containers: []
	W1212 21:29:26.122742  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:26.122748  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:26.122808  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:26.151243  543793 cri.go:89] found id: ""
	I1212 21:29:26.151269  543793 logs.go:282] 0 containers: []
	W1212 21:29:26.151278  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:26.151284  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:26.151340  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:26.176434  543793 cri.go:89] found id: ""
	I1212 21:29:26.176461  543793 logs.go:282] 0 containers: []
	W1212 21:29:26.176471  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:26.176477  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:26.176535  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:26.203103  543793 cri.go:89] found id: ""
	I1212 21:29:26.203132  543793 logs.go:282] 0 containers: []
	W1212 21:29:26.203140  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:26.203146  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:26.203202  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:26.232074  543793 cri.go:89] found id: ""
	I1212 21:29:26.232105  543793 logs.go:282] 0 containers: []
	W1212 21:29:26.232113  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:26.232120  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:26.232178  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:26.261381  543793 cri.go:89] found id: ""
	I1212 21:29:26.261411  543793 logs.go:282] 0 containers: []
	W1212 21:29:26.261419  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:26.261426  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:26.261484  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:26.286741  543793 cri.go:89] found id: ""
	I1212 21:29:26.286777  543793 logs.go:282] 0 containers: []
	W1212 21:29:26.286786  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:26.286792  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:26.286857  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:26.312721  543793 cri.go:89] found id: ""
	I1212 21:29:26.312747  543793 logs.go:282] 0 containers: []
	W1212 21:29:26.312756  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:26.312765  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:26.312776  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:26.382235  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:26.382272  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:26.398393  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:26.398424  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:26.488170  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:26.488193  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:26.488205  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:26.530467  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:26.530514  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:29.074435  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:29.085250  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:29.085312  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:29.111535  543793 cri.go:89] found id: ""
	I1212 21:29:29.111558  543793 logs.go:282] 0 containers: []
	W1212 21:29:29.111567  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:29.111573  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:29.111628  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:29.138793  543793 cri.go:89] found id: ""
	I1212 21:29:29.138815  543793 logs.go:282] 0 containers: []
	W1212 21:29:29.138824  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:29.138830  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:29.138889  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:29.169044  543793 cri.go:89] found id: ""
	I1212 21:29:29.169066  543793 logs.go:282] 0 containers: []
	W1212 21:29:29.169074  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:29.169080  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:29.169138  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:29.197027  543793 cri.go:89] found id: ""
	I1212 21:29:29.197049  543793 logs.go:282] 0 containers: []
	W1212 21:29:29.197058  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:29.197063  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:29.197122  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:29.242748  543793 cri.go:89] found id: ""
	I1212 21:29:29.242770  543793 logs.go:282] 0 containers: []
	W1212 21:29:29.242779  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:29.242785  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:29.242891  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:29.277136  543793 cri.go:89] found id: ""
	I1212 21:29:29.277157  543793 logs.go:282] 0 containers: []
	W1212 21:29:29.277165  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:29.277171  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:29.277229  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:29.310690  543793 cri.go:89] found id: ""
	I1212 21:29:29.310729  543793 logs.go:282] 0 containers: []
	W1212 21:29:29.310739  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:29.310745  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:29.310814  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:29.342650  543793 cri.go:89] found id: ""
	I1212 21:29:29.342684  543793 logs.go:282] 0 containers: []
	W1212 21:29:29.342693  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:29.342702  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:29.342715  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:29.412104  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:29.412225  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:29.431138  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:29.431164  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:29.502608  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:29.502642  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:29.502655  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:29.540214  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:29.540251  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:32.078965  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:32.089092  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:32.089185  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:32.116111  543793 cri.go:89] found id: ""
	I1212 21:29:32.116133  543793 logs.go:282] 0 containers: []
	W1212 21:29:32.116142  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:32.116148  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:32.116203  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:32.165156  543793 cri.go:89] found id: ""
	I1212 21:29:32.165179  543793 logs.go:282] 0 containers: []
	W1212 21:29:32.165188  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:32.165194  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:32.165250  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:32.190366  543793 cri.go:89] found id: ""
	I1212 21:29:32.190388  543793 logs.go:282] 0 containers: []
	W1212 21:29:32.190396  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:32.190402  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:32.190465  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:32.217999  543793 cri.go:89] found id: ""
	I1212 21:29:32.218026  543793 logs.go:282] 0 containers: []
	W1212 21:29:32.218035  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:32.218041  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:32.218100  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:32.243959  543793 cri.go:89] found id: ""
	I1212 21:29:32.243986  543793 logs.go:282] 0 containers: []
	W1212 21:29:32.243995  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:32.244001  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:32.244058  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:32.272281  543793 cri.go:89] found id: ""
	I1212 21:29:32.272347  543793 logs.go:282] 0 containers: []
	W1212 21:29:32.272399  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:32.272424  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:32.272503  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:32.297960  543793 cri.go:89] found id: ""
	I1212 21:29:32.297996  543793 logs.go:282] 0 containers: []
	W1212 21:29:32.298005  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:32.298011  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:32.298072  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:32.324278  543793 cri.go:89] found id: ""
	I1212 21:29:32.324349  543793 logs.go:282] 0 containers: []
	W1212 21:29:32.324388  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:32.324401  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:32.324413  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:32.396952  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:32.397023  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:32.397050  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:32.427247  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:32.427282  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:32.458231  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:32.458261  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:32.528407  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:32.528445  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:35.045650  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:35.055757  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:35.055853  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:35.083541  543793 cri.go:89] found id: ""
	I1212 21:29:35.083576  543793 logs.go:282] 0 containers: []
	W1212 21:29:35.083585  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:35.083592  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:35.083659  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:35.113622  543793 cri.go:89] found id: ""
	I1212 21:29:35.113647  543793 logs.go:282] 0 containers: []
	W1212 21:29:35.113655  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:35.113660  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:35.113720  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:35.147199  543793 cri.go:89] found id: ""
	I1212 21:29:35.147225  543793 logs.go:282] 0 containers: []
	W1212 21:29:35.147234  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:35.147240  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:35.147302  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:35.180772  543793 cri.go:89] found id: ""
	I1212 21:29:35.180802  543793 logs.go:282] 0 containers: []
	W1212 21:29:35.180812  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:35.180818  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:35.180888  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:35.206651  543793 cri.go:89] found id: ""
	I1212 21:29:35.206686  543793 logs.go:282] 0 containers: []
	W1212 21:29:35.206694  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:35.206701  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:35.206768  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:35.231710  543793 cri.go:89] found id: ""
	I1212 21:29:35.231738  543793 logs.go:282] 0 containers: []
	W1212 21:29:35.231747  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:35.231754  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:35.231819  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:35.257587  543793 cri.go:89] found id: ""
	I1212 21:29:35.257623  543793 logs.go:282] 0 containers: []
	W1212 21:29:35.257633  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:35.257639  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:35.257706  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:35.284146  543793 cri.go:89] found id: ""
	I1212 21:29:35.284170  543793 logs.go:282] 0 containers: []
	W1212 21:29:35.284178  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:35.284187  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:35.284201  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:35.300302  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:35.300399  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:35.363361  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:35.363383  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:35.363398  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:35.393895  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:35.393932  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:35.421919  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:35.421948  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:37.990511  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:38.004250  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:38.004334  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:38.049066  543793 cri.go:89] found id: ""
	I1212 21:29:38.049090  543793 logs.go:282] 0 containers: []
	W1212 21:29:38.049098  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:38.049104  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:38.049165  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:38.077548  543793 cri.go:89] found id: ""
	I1212 21:29:38.077581  543793 logs.go:282] 0 containers: []
	W1212 21:29:38.077590  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:38.077599  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:38.077663  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:38.108058  543793 cri.go:89] found id: ""
	I1212 21:29:38.108138  543793 logs.go:282] 0 containers: []
	W1212 21:29:38.108174  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:38.108197  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:38.108283  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:38.134787  543793 cri.go:89] found id: ""
	I1212 21:29:38.134811  543793 logs.go:282] 0 containers: []
	W1212 21:29:38.134820  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:38.134826  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:38.134941  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:38.161412  543793 cri.go:89] found id: ""
	I1212 21:29:38.161436  543793 logs.go:282] 0 containers: []
	W1212 21:29:38.161445  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:38.161451  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:38.161528  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:38.186798  543793 cri.go:89] found id: ""
	I1212 21:29:38.186831  543793 logs.go:282] 0 containers: []
	W1212 21:29:38.186840  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:38.186848  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:38.186943  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:38.216853  543793 cri.go:89] found id: ""
	I1212 21:29:38.216882  543793 logs.go:282] 0 containers: []
	W1212 21:29:38.216891  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:38.216897  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:38.216988  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:38.242568  543793 cri.go:89] found id: ""
	I1212 21:29:38.242636  543793 logs.go:282] 0 containers: []
	W1212 21:29:38.242658  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:38.242684  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:38.242703  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:38.312419  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:38.312459  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:38.328625  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:38.328656  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:38.394538  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:38.394559  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:38.394572  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:38.424444  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:38.424478  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:40.962265  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:40.973689  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:40.973757  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:41.001529  543793 cri.go:89] found id: ""
	I1212 21:29:41.001555  543793 logs.go:282] 0 containers: []
	W1212 21:29:41.001563  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:41.001570  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:41.001637  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:41.034822  543793 cri.go:89] found id: ""
	I1212 21:29:41.034844  543793 logs.go:282] 0 containers: []
	W1212 21:29:41.034852  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:41.034857  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:41.034913  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:41.062900  543793 cri.go:89] found id: ""
	I1212 21:29:41.062973  543793 logs.go:282] 0 containers: []
	W1212 21:29:41.062996  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:41.063015  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:41.063100  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:41.092038  543793 cri.go:89] found id: ""
	I1212 21:29:41.092109  543793 logs.go:282] 0 containers: []
	W1212 21:29:41.092132  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:41.092150  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:41.092236  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:41.121047  543793 cri.go:89] found id: ""
	I1212 21:29:41.121074  543793 logs.go:282] 0 containers: []
	W1212 21:29:41.121083  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:41.121089  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:41.121150  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:41.146685  543793 cri.go:89] found id: ""
	I1212 21:29:41.146716  543793 logs.go:282] 0 containers: []
	W1212 21:29:41.146726  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:41.146732  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:41.146789  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:41.175806  543793 cri.go:89] found id: ""
	I1212 21:29:41.175829  543793 logs.go:282] 0 containers: []
	W1212 21:29:41.175838  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:41.175844  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:41.175900  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:41.204191  543793 cri.go:89] found id: ""
	I1212 21:29:41.204214  543793 logs.go:282] 0 containers: []
	W1212 21:29:41.204222  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:41.204232  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:41.204259  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:41.271904  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:41.271940  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:41.288803  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:41.288834  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:41.356536  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:41.356555  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:41.356569  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:41.388124  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:41.388161  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
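The cycle that just completed repeats throughout this log: minikube polls for a kube-apiserver process, asks CRI-O (via crictl) for containers matching each expected control-plane component, finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch of that per-component check, assuming it is run on the minikube node with crictl on the PATH (the component names mirror the ones queried in the log; the loop itself is illustrative, not minikube's actual code):

    # Hypothetical reproduction of the per-component container check seen above.
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

In this run every check returns an empty ID list, which is why each iteration ends with the same "No container was found matching ..." warnings before the log gathering starts again.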
	I1212 21:29:43.923731  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:43.933769  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:43.933845  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:43.968818  543793 cri.go:89] found id: ""
	I1212 21:29:43.968847  543793 logs.go:282] 0 containers: []
	W1212 21:29:43.968856  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:43.968862  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:43.968919  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:43.998748  543793 cri.go:89] found id: ""
	I1212 21:29:43.998772  543793 logs.go:282] 0 containers: []
	W1212 21:29:43.998781  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:43.998787  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:43.998847  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:44.031124  543793 cri.go:89] found id: ""
	I1212 21:29:44.031152  543793 logs.go:282] 0 containers: []
	W1212 21:29:44.031162  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:44.031168  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:44.031226  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:44.061242  543793 cri.go:89] found id: ""
	I1212 21:29:44.061270  543793 logs.go:282] 0 containers: []
	W1212 21:29:44.061280  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:44.061286  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:44.061347  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:44.087904  543793 cri.go:89] found id: ""
	I1212 21:29:44.087927  543793 logs.go:282] 0 containers: []
	W1212 21:29:44.087935  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:44.087942  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:44.088001  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:44.119990  543793 cri.go:89] found id: ""
	I1212 21:29:44.120015  543793 logs.go:282] 0 containers: []
	W1212 21:29:44.120024  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:44.120030  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:44.120088  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:44.146201  543793 cri.go:89] found id: ""
	I1212 21:29:44.146224  543793 logs.go:282] 0 containers: []
	W1212 21:29:44.146232  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:44.146238  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:44.146295  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:44.172070  543793 cri.go:89] found id: ""
	I1212 21:29:44.172141  543793 logs.go:282] 0 containers: []
	W1212 21:29:44.172165  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:44.172188  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:44.172229  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:44.238825  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:44.238865  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:44.255965  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:44.255998  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:44.325420  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:44.325442  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:44.325456  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:44.356244  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:44.356280  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:46.888241  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:46.898574  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:46.898649  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:46.924306  543793 cri.go:89] found id: ""
	I1212 21:29:46.924331  543793 logs.go:282] 0 containers: []
	W1212 21:29:46.924339  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:46.924345  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:46.924440  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:46.951401  543793 cri.go:89] found id: ""
	I1212 21:29:46.951426  543793 logs.go:282] 0 containers: []
	W1212 21:29:46.951435  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:46.951441  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:46.951499  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:46.986779  543793 cri.go:89] found id: ""
	I1212 21:29:46.986807  543793 logs.go:282] 0 containers: []
	W1212 21:29:46.986816  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:46.986822  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:46.986894  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:47.017752  543793 cri.go:89] found id: ""
	I1212 21:29:47.017780  543793 logs.go:282] 0 containers: []
	W1212 21:29:47.017789  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:47.017795  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:47.017853  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:47.044423  543793 cri.go:89] found id: ""
	I1212 21:29:47.044450  543793 logs.go:282] 0 containers: []
	W1212 21:29:47.044459  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:47.044465  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:47.044533  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:47.070292  543793 cri.go:89] found id: ""
	I1212 21:29:47.070319  543793 logs.go:282] 0 containers: []
	W1212 21:29:47.070328  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:47.070334  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:47.070423  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:47.095854  543793 cri.go:89] found id: ""
	I1212 21:29:47.095881  543793 logs.go:282] 0 containers: []
	W1212 21:29:47.095889  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:47.095896  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:47.095954  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:47.122731  543793 cri.go:89] found id: ""
	I1212 21:29:47.122758  543793 logs.go:282] 0 containers: []
	W1212 21:29:47.122767  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:47.122784  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:47.122795  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:47.153222  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:47.153259  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:47.182036  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:47.182067  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:47.248653  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:47.248694  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:47.266239  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:47.266273  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:47.334828  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
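Each "describe nodes" attempt fails identically because nothing is listening on localhost:8443, so kubectl cannot reach the API server at all; the kubeconfig is irrelevant until the apiserver container exists and is serving. A quick hedged check from the node, assuming the apiserver would normally serve HTTPS on port 8443:

    # Hypothetical probe: if nothing answers on 8443, kubectl will keep
    # reporting "connection ... refused" no matter how often it retries.
    curl -ks https://localhost:8443/healthz || echo "apiserver not reachable on 8443"
    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output => no apiserver container running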
	I1212 21:29:49.835735  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:49.849430  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:49.849502  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:49.879421  543793 cri.go:89] found id: ""
	I1212 21:29:49.879448  543793 logs.go:282] 0 containers: []
	W1212 21:29:49.879458  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:49.879464  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:49.879522  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:49.915381  543793 cri.go:89] found id: ""
	I1212 21:29:49.915409  543793 logs.go:282] 0 containers: []
	W1212 21:29:49.915418  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:49.915425  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:49.915480  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:49.951995  543793 cri.go:89] found id: ""
	I1212 21:29:49.952024  543793 logs.go:282] 0 containers: []
	W1212 21:29:49.952032  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:49.952039  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:49.952112  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:50.042097  543793 cri.go:89] found id: ""
	I1212 21:29:50.042125  543793 logs.go:282] 0 containers: []
	W1212 21:29:50.042134  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:50.042150  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:50.042213  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:50.082167  543793 cri.go:89] found id: ""
	I1212 21:29:50.082194  543793 logs.go:282] 0 containers: []
	W1212 21:29:50.082204  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:50.082211  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:50.082276  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:50.109764  543793 cri.go:89] found id: ""
	I1212 21:29:50.109793  543793 logs.go:282] 0 containers: []
	W1212 21:29:50.109803  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:50.109809  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:50.109868  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:50.137089  543793 cri.go:89] found id: ""
	I1212 21:29:50.137117  543793 logs.go:282] 0 containers: []
	W1212 21:29:50.137126  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:50.137133  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:50.137190  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:50.163423  543793 cri.go:89] found id: ""
	I1212 21:29:50.163447  543793 logs.go:282] 0 containers: []
	W1212 21:29:50.163455  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:50.163463  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:50.163475  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:50.192282  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:50.192308  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:50.259638  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:50.259677  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:50.279692  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:50.279722  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:50.377148  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:50.377171  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:50.377184  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:52.916540  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:52.927777  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:52.927901  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:52.956061  543793 cri.go:89] found id: ""
	I1212 21:29:52.956090  543793 logs.go:282] 0 containers: []
	W1212 21:29:52.956099  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:52.956105  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:52.956163  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:52.986047  543793 cri.go:89] found id: ""
	I1212 21:29:52.986076  543793 logs.go:282] 0 containers: []
	W1212 21:29:52.986086  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:52.986092  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:52.986149  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:53.017050  543793 cri.go:89] found id: ""
	I1212 21:29:53.017078  543793 logs.go:282] 0 containers: []
	W1212 21:29:53.017088  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:53.017095  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:53.017153  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:53.049888  543793 cri.go:89] found id: ""
	I1212 21:29:53.049915  543793 logs.go:282] 0 containers: []
	W1212 21:29:53.049924  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:53.049931  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:53.049989  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:53.076355  543793 cri.go:89] found id: ""
	I1212 21:29:53.076404  543793 logs.go:282] 0 containers: []
	W1212 21:29:53.076414  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:53.076420  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:53.076486  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:53.102532  543793 cri.go:89] found id: ""
	I1212 21:29:53.102559  543793 logs.go:282] 0 containers: []
	W1212 21:29:53.102569  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:53.102575  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:53.102633  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:53.128178  543793 cri.go:89] found id: ""
	I1212 21:29:53.128206  543793 logs.go:282] 0 containers: []
	W1212 21:29:53.128215  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:53.128234  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:53.128294  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:53.154102  543793 cri.go:89] found id: ""
	I1212 21:29:53.154126  543793 logs.go:282] 0 containers: []
	W1212 21:29:53.154134  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:53.154143  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:53.154155  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:53.182409  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:53.182441  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:53.252971  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:53.253015  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:53.270098  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:53.270126  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:53.336875  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:53.336898  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:53.336912  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:55.867452  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:55.877588  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:55.877681  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:55.912531  543793 cri.go:89] found id: ""
	I1212 21:29:55.912558  543793 logs.go:282] 0 containers: []
	W1212 21:29:55.912567  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:55.912573  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:55.912633  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:55.943631  543793 cri.go:89] found id: ""
	I1212 21:29:55.943660  543793 logs.go:282] 0 containers: []
	W1212 21:29:55.943669  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:55.943675  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:55.943731  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:55.982873  543793 cri.go:89] found id: ""
	I1212 21:29:55.982896  543793 logs.go:282] 0 containers: []
	W1212 21:29:55.982905  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:55.982911  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:55.982968  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:56.023919  543793 cri.go:89] found id: ""
	I1212 21:29:56.023942  543793 logs.go:282] 0 containers: []
	W1212 21:29:56.023951  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:56.023957  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:56.024017  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:56.054204  543793 cri.go:89] found id: ""
	I1212 21:29:56.054228  543793 logs.go:282] 0 containers: []
	W1212 21:29:56.054237  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:56.054245  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:56.054306  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:56.080688  543793 cri.go:89] found id: ""
	I1212 21:29:56.080716  543793 logs.go:282] 0 containers: []
	W1212 21:29:56.080726  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:56.080732  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:56.080813  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:56.107485  543793 cri.go:89] found id: ""
	I1212 21:29:56.107512  543793 logs.go:282] 0 containers: []
	W1212 21:29:56.107521  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:56.107528  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:56.107592  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:56.134954  543793 cri.go:89] found id: ""
	I1212 21:29:56.134981  543793 logs.go:282] 0 containers: []
	W1212 21:29:56.134989  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:56.134999  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:56.135011  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:56.201882  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:56.201915  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:56.218092  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:56.218123  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:56.300668  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:56.300690  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:56.300703  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:56.340032  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:56.340110  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:29:58.887543  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:29:58.898033  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:29:58.898108  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:29:58.924876  543793 cri.go:89] found id: ""
	I1212 21:29:58.924901  543793 logs.go:282] 0 containers: []
	W1212 21:29:58.924910  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:29:58.924918  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:29:58.924979  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:29:58.951076  543793 cri.go:89] found id: ""
	I1212 21:29:58.951103  543793 logs.go:282] 0 containers: []
	W1212 21:29:58.951112  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:29:58.951118  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:29:58.951178  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:29:58.990487  543793 cri.go:89] found id: ""
	I1212 21:29:58.990516  543793 logs.go:282] 0 containers: []
	W1212 21:29:58.990526  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:29:58.990538  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:29:58.990599  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:29:59.024132  543793 cri.go:89] found id: ""
	I1212 21:29:59.024160  543793 logs.go:282] 0 containers: []
	W1212 21:29:59.024170  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:29:59.024176  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:29:59.024234  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:29:59.051196  543793 cri.go:89] found id: ""
	I1212 21:29:59.051218  543793 logs.go:282] 0 containers: []
	W1212 21:29:59.051226  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:29:59.051232  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:29:59.051306  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:29:59.078549  543793 cri.go:89] found id: ""
	I1212 21:29:59.078575  543793 logs.go:282] 0 containers: []
	W1212 21:29:59.078590  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:29:59.078597  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:29:59.078655  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:29:59.107355  543793 cri.go:89] found id: ""
	I1212 21:29:59.107379  543793 logs.go:282] 0 containers: []
	W1212 21:29:59.107389  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:29:59.107395  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:29:59.107453  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:29:59.133489  543793 cri.go:89] found id: ""
	I1212 21:29:59.133512  543793 logs.go:282] 0 containers: []
	W1212 21:29:59.133520  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:29:59.133528  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:29:59.133539  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:29:59.200332  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:29:59.200374  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:29:59.216006  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:29:59.216035  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:29:59.279406  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:29:59.279427  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:29:59.279450  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:29:59.310231  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:29:59.310266  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:01.841552  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:01.852816  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:01.852887  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:01.882058  543793 cri.go:89] found id: ""
	I1212 21:30:01.882086  543793 logs.go:282] 0 containers: []
	W1212 21:30:01.882095  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:01.882101  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:01.882162  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:01.909777  543793 cri.go:89] found id: ""
	I1212 21:30:01.909803  543793 logs.go:282] 0 containers: []
	W1212 21:30:01.909812  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:01.909818  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:01.909878  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:01.938275  543793 cri.go:89] found id: ""
	I1212 21:30:01.938304  543793 logs.go:282] 0 containers: []
	W1212 21:30:01.938314  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:01.938321  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:01.938403  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:01.975661  543793 cri.go:89] found id: ""
	I1212 21:30:01.975691  543793 logs.go:282] 0 containers: []
	W1212 21:30:01.975701  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:01.975725  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:01.975811  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:02.016315  543793 cri.go:89] found id: ""
	I1212 21:30:02.016409  543793 logs.go:282] 0 containers: []
	W1212 21:30:02.016435  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:02.016455  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:02.016545  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:02.045856  543793 cri.go:89] found id: ""
	I1212 21:30:02.045882  543793 logs.go:282] 0 containers: []
	W1212 21:30:02.045903  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:02.045926  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:02.046003  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:02.074677  543793 cri.go:89] found id: ""
	I1212 21:30:02.074742  543793 logs.go:282] 0 containers: []
	W1212 21:30:02.074756  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:02.074769  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:02.074829  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:02.105093  543793 cri.go:89] found id: ""
	I1212 21:30:02.105126  543793 logs.go:282] 0 containers: []
	W1212 21:30:02.105135  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:02.105145  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:02.105159  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:02.139222  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:02.139257  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:02.171185  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:02.171217  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:02.242117  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:02.242155  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:02.259904  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:02.259934  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:02.328758  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:04.829012  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:04.840615  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:04.840687  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:04.866442  543793 cri.go:89] found id: ""
	I1212 21:30:04.866472  543793 logs.go:282] 0 containers: []
	W1212 21:30:04.866481  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:04.866488  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:04.866572  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:04.892731  543793 cri.go:89] found id: ""
	I1212 21:30:04.892761  543793 logs.go:282] 0 containers: []
	W1212 21:30:04.892769  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:04.892775  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:04.892862  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:04.918745  543793 cri.go:89] found id: ""
	I1212 21:30:04.918772  543793 logs.go:282] 0 containers: []
	W1212 21:30:04.918780  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:04.918786  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:04.918846  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:04.948243  543793 cri.go:89] found id: ""
	I1212 21:30:04.948267  543793 logs.go:282] 0 containers: []
	W1212 21:30:04.948276  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:04.948282  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:04.948390  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:04.986704  543793 cri.go:89] found id: ""
	I1212 21:30:04.986734  543793 logs.go:282] 0 containers: []
	W1212 21:30:04.986744  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:04.986750  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:04.986811  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:05.049122  543793 cri.go:89] found id: ""
	I1212 21:30:05.049156  543793 logs.go:282] 0 containers: []
	W1212 21:30:05.049169  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:05.049176  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:05.049236  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:05.076704  543793 cri.go:89] found id: ""
	I1212 21:30:05.076799  543793 logs.go:282] 0 containers: []
	W1212 21:30:05.076823  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:05.076845  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:05.076949  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:05.107738  543793 cri.go:89] found id: ""
	I1212 21:30:05.107813  543793 logs.go:282] 0 containers: []
	W1212 21:30:05.107836  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:05.107859  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:05.107895  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:05.124694  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:05.124733  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:05.190535  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:05.190559  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:05.190572  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:05.222209  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:05.222242  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:05.255621  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:05.255651  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:07.824556  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:07.835357  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:07.835425  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:07.864111  543793 cri.go:89] found id: ""
	I1212 21:30:07.864136  543793 logs.go:282] 0 containers: []
	W1212 21:30:07.864145  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:07.864151  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:07.864209  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:07.890750  543793 cri.go:89] found id: ""
	I1212 21:30:07.890777  543793 logs.go:282] 0 containers: []
	W1212 21:30:07.890788  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:07.890794  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:07.890854  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:07.919969  543793 cri.go:89] found id: ""
	I1212 21:30:07.919996  543793 logs.go:282] 0 containers: []
	W1212 21:30:07.920005  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:07.920012  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:07.920068  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:07.945873  543793 cri.go:89] found id: ""
	I1212 21:30:07.945899  543793 logs.go:282] 0 containers: []
	W1212 21:30:07.945907  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:07.945914  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:07.945970  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:07.979530  543793 cri.go:89] found id: ""
	I1212 21:30:07.979559  543793 logs.go:282] 0 containers: []
	W1212 21:30:07.979568  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:07.979575  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:07.979638  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:08.018679  543793 cri.go:89] found id: ""
	I1212 21:30:08.018705  543793 logs.go:282] 0 containers: []
	W1212 21:30:08.018714  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:08.018721  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:08.018778  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:08.051883  543793 cri.go:89] found id: ""
	I1212 21:30:08.051913  543793 logs.go:282] 0 containers: []
	W1212 21:30:08.051922  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:08.051929  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:08.051997  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:08.080938  543793 cri.go:89] found id: ""
	I1212 21:30:08.081020  543793 logs.go:282] 0 containers: []
	W1212 21:30:08.081043  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:08.081060  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:08.081084  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:08.148508  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:08.148545  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:08.164640  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:08.164669  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:08.225834  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:08.225856  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:08.225872  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:08.256101  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:08.256138  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:10.786230  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:10.796567  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:10.796633  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:10.831391  543793 cri.go:89] found id: ""
	I1212 21:30:10.831415  543793 logs.go:282] 0 containers: []
	W1212 21:30:10.831423  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:10.831429  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:10.831490  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:10.863364  543793 cri.go:89] found id: ""
	I1212 21:30:10.863387  543793 logs.go:282] 0 containers: []
	W1212 21:30:10.863395  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:10.863401  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:10.863477  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:10.889595  543793 cri.go:89] found id: ""
	I1212 21:30:10.889619  543793 logs.go:282] 0 containers: []
	W1212 21:30:10.889628  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:10.889634  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:10.889694  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:10.915910  543793 cri.go:89] found id: ""
	I1212 21:30:10.915936  543793 logs.go:282] 0 containers: []
	W1212 21:30:10.915950  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:10.915957  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:10.916017  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:10.941664  543793 cri.go:89] found id: ""
	I1212 21:30:10.941689  543793 logs.go:282] 0 containers: []
	W1212 21:30:10.941698  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:10.941704  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:10.941792  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:10.979372  543793 cri.go:89] found id: ""
	I1212 21:30:10.979448  543793 logs.go:282] 0 containers: []
	W1212 21:30:10.979473  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:10.979492  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:10.979576  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:11.015046  543793 cri.go:89] found id: ""
	I1212 21:30:11.015125  543793 logs.go:282] 0 containers: []
	W1212 21:30:11.015149  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:11.015171  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:11.015255  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:11.051413  543793 cri.go:89] found id: ""
	I1212 21:30:11.051515  543793 logs.go:282] 0 containers: []
	W1212 21:30:11.051618  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:11.051648  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:11.051674  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:11.118849  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:11.118947  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:11.140340  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:11.140478  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:11.209318  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:11.209343  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:11.209357  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:11.240262  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:11.240297  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:13.770947  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:13.780993  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:13.781065  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:13.808200  543793 cri.go:89] found id: ""
	I1212 21:30:13.808229  543793 logs.go:282] 0 containers: []
	W1212 21:30:13.808238  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:13.808244  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:13.808307  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:13.841305  543793 cri.go:89] found id: ""
	I1212 21:30:13.841335  543793 logs.go:282] 0 containers: []
	W1212 21:30:13.841345  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:13.841352  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:13.841421  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:13.869097  543793 cri.go:89] found id: ""
	I1212 21:30:13.869121  543793 logs.go:282] 0 containers: []
	W1212 21:30:13.869130  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:13.869135  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:13.869194  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:13.899630  543793 cri.go:89] found id: ""
	I1212 21:30:13.899658  543793 logs.go:282] 0 containers: []
	W1212 21:30:13.899667  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:13.899673  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:13.899735  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:13.929699  543793 cri.go:89] found id: ""
	I1212 21:30:13.929778  543793 logs.go:282] 0 containers: []
	W1212 21:30:13.929802  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:13.929815  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:13.929885  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:13.958503  543793 cri.go:89] found id: ""
	I1212 21:30:13.958527  543793 logs.go:282] 0 containers: []
	W1212 21:30:13.958535  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:13.958542  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:13.958601  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:13.992656  543793 cri.go:89] found id: ""
	I1212 21:30:13.992679  543793 logs.go:282] 0 containers: []
	W1212 21:30:13.992688  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:13.992694  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:13.992760  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:14.039468  543793 cri.go:89] found id: ""
	I1212 21:30:14.039545  543793 logs.go:282] 0 containers: []
	W1212 21:30:14.039567  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:14.039590  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:14.039632  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:14.107013  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:14.107052  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:14.123577  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:14.123616  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:14.188175  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:14.188240  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:14.188268  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:14.218600  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:14.218636  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:16.750640  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:16.761216  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:16.761286  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:16.787629  543793 cri.go:89] found id: ""
	I1212 21:30:16.787657  543793 logs.go:282] 0 containers: []
	W1212 21:30:16.787666  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:16.787672  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:16.787733  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:16.814812  543793 cri.go:89] found id: ""
	I1212 21:30:16.814838  543793 logs.go:282] 0 containers: []
	W1212 21:30:16.814848  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:16.814854  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:16.814911  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:16.847673  543793 cri.go:89] found id: ""
	I1212 21:30:16.847702  543793 logs.go:282] 0 containers: []
	W1212 21:30:16.847712  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:16.847720  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:16.847781  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:16.874984  543793 cri.go:89] found id: ""
	I1212 21:30:16.875013  543793 logs.go:282] 0 containers: []
	W1212 21:30:16.875021  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:16.875027  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:16.875135  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:16.902391  543793 cri.go:89] found id: ""
	I1212 21:30:16.902414  543793 logs.go:282] 0 containers: []
	W1212 21:30:16.902423  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:16.902429  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:16.902486  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:16.928984  543793 cri.go:89] found id: ""
	I1212 21:30:16.929008  543793 logs.go:282] 0 containers: []
	W1212 21:30:16.929017  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:16.929024  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:16.929082  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:16.956894  543793 cri.go:89] found id: ""
	I1212 21:30:16.956921  543793 logs.go:282] 0 containers: []
	W1212 21:30:16.956930  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:16.956936  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:16.957021  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:16.993024  543793 cri.go:89] found id: ""
	I1212 21:30:16.993052  543793 logs.go:282] 0 containers: []
	W1212 21:30:16.993061  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:16.993070  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:16.993080  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:17.076268  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:17.076309  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:17.092531  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:17.092563  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:17.161695  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:17.161720  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:17.161733  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:17.192514  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:17.192551  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:19.725096  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:19.736916  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:19.736997  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:19.765421  543793 cri.go:89] found id: ""
	I1212 21:30:19.765446  543793 logs.go:282] 0 containers: []
	W1212 21:30:19.765456  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:19.765462  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:19.765525  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:19.794822  543793 cri.go:89] found id: ""
	I1212 21:30:19.794852  543793 logs.go:282] 0 containers: []
	W1212 21:30:19.794861  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:19.794868  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:19.794928  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:19.823652  543793 cri.go:89] found id: ""
	I1212 21:30:19.823679  543793 logs.go:282] 0 containers: []
	W1212 21:30:19.823688  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:19.823694  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:19.823758  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:19.857670  543793 cri.go:89] found id: ""
	I1212 21:30:19.857696  543793 logs.go:282] 0 containers: []
	W1212 21:30:19.857705  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:19.857712  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:19.857771  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:19.888966  543793 cri.go:89] found id: ""
	I1212 21:30:19.888992  543793 logs.go:282] 0 containers: []
	W1212 21:30:19.889001  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:19.889008  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:19.889068  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:19.921231  543793 cri.go:89] found id: ""
	I1212 21:30:19.921259  543793 logs.go:282] 0 containers: []
	W1212 21:30:19.921268  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:19.921275  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:19.921337  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:19.948665  543793 cri.go:89] found id: ""
	I1212 21:30:19.948690  543793 logs.go:282] 0 containers: []
	W1212 21:30:19.948699  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:19.948707  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:19.948769  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:19.978803  543793 cri.go:89] found id: ""
	I1212 21:30:19.978835  543793 logs.go:282] 0 containers: []
	W1212 21:30:19.978845  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:19.978855  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:19.978867  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:20.035115  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:20.035147  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:20.106724  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:20.106765  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:20.123979  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:20.124013  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:20.197162  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:20.197185  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:20.197197  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:22.728782  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:22.739018  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:22.739085  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:22.777297  543793 cri.go:89] found id: ""
	I1212 21:30:22.777320  543793 logs.go:282] 0 containers: []
	W1212 21:30:22.777329  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:22.777335  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:22.777391  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:22.809672  543793 cri.go:89] found id: ""
	I1212 21:30:22.809696  543793 logs.go:282] 0 containers: []
	W1212 21:30:22.809704  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:22.809710  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:22.809766  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:22.843022  543793 cri.go:89] found id: ""
	I1212 21:30:22.843046  543793 logs.go:282] 0 containers: []
	W1212 21:30:22.843055  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:22.843061  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:22.843117  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:22.886374  543793 cri.go:89] found id: ""
	I1212 21:30:22.886399  543793 logs.go:282] 0 containers: []
	W1212 21:30:22.886414  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:22.886421  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:22.886479  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:22.911860  543793 cri.go:89] found id: ""
	I1212 21:30:22.911885  543793 logs.go:282] 0 containers: []
	W1212 21:30:22.911894  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:22.911900  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:22.911968  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:22.937401  543793 cri.go:89] found id: ""
	I1212 21:30:22.937427  543793 logs.go:282] 0 containers: []
	W1212 21:30:22.937436  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:22.937442  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:22.937497  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:22.978660  543793 cri.go:89] found id: ""
	I1212 21:30:22.978684  543793 logs.go:282] 0 containers: []
	W1212 21:30:22.978693  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:22.978699  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:22.978758  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:23.016920  543793 cri.go:89] found id: ""
	I1212 21:30:23.016949  543793 logs.go:282] 0 containers: []
	W1212 21:30:23.016959  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:23.016969  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:23.016982  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:23.037533  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:23.037570  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:23.103071  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:23.103142  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:23.103162  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:23.134420  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:23.134456  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:23.167329  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:23.167356  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:25.734560  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:25.745772  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:25.745873  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:25.775329  543793 cri.go:89] found id: ""
	I1212 21:30:25.775363  543793 logs.go:282] 0 containers: []
	W1212 21:30:25.775373  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:25.775402  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:25.775478  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:25.804543  543793 cri.go:89] found id: ""
	I1212 21:30:25.804567  543793 logs.go:282] 0 containers: []
	W1212 21:30:25.804604  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:25.804611  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:25.804720  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:25.857655  543793 cri.go:89] found id: ""
	I1212 21:30:25.857721  543793 logs.go:282] 0 containers: []
	W1212 21:30:25.857745  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:25.857760  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:25.857838  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:25.904326  543793 cri.go:89] found id: ""
	I1212 21:30:25.904415  543793 logs.go:282] 0 containers: []
	W1212 21:30:25.904441  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:25.904461  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:25.904534  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:25.951541  543793 cri.go:89] found id: ""
	I1212 21:30:25.951564  543793 logs.go:282] 0 containers: []
	W1212 21:30:25.951573  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:25.951579  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:25.951636  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:26.009430  543793 cri.go:89] found id: ""
	I1212 21:30:26.009455  543793 logs.go:282] 0 containers: []
	W1212 21:30:26.009465  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:26.009471  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:26.009536  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:26.069258  543793 cri.go:89] found id: ""
	I1212 21:30:26.069281  543793 logs.go:282] 0 containers: []
	W1212 21:30:26.069289  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:26.069296  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:26.069355  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:26.109827  543793 cri.go:89] found id: ""
	I1212 21:30:26.109851  543793 logs.go:282] 0 containers: []
	W1212 21:30:26.109860  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:26.109869  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:26.109881  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:26.191341  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:26.191421  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:26.207668  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:26.207696  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:26.290290  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:26.290307  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:26.290319  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:26.324442  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:26.324481  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:28.858171  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:28.872950  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:28.873024  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:28.903800  543793 cri.go:89] found id: ""
	I1212 21:30:28.903829  543793 logs.go:282] 0 containers: []
	W1212 21:30:28.903839  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:28.903845  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:28.903905  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:28.938946  543793 cri.go:89] found id: ""
	I1212 21:30:28.938969  543793 logs.go:282] 0 containers: []
	W1212 21:30:28.938978  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:28.938984  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:28.939042  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:28.986630  543793 cri.go:89] found id: ""
	I1212 21:30:28.986655  543793 logs.go:282] 0 containers: []
	W1212 21:30:28.986664  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:28.986670  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:28.986730  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:29.036490  543793 cri.go:89] found id: ""
	I1212 21:30:29.036513  543793 logs.go:282] 0 containers: []
	W1212 21:30:29.036521  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:29.036527  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:29.036581  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:29.077645  543793 cri.go:89] found id: ""
	I1212 21:30:29.077682  543793 logs.go:282] 0 containers: []
	W1212 21:30:29.077691  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:29.077697  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:29.077823  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:29.121667  543793 cri.go:89] found id: ""
	I1212 21:30:29.121690  543793 logs.go:282] 0 containers: []
	W1212 21:30:29.121698  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:29.121705  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:29.121761  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:29.159783  543793 cri.go:89] found id: ""
	I1212 21:30:29.159803  543793 logs.go:282] 0 containers: []
	W1212 21:30:29.159811  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:29.159827  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:29.159882  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:29.189828  543793 cri.go:89] found id: ""
	I1212 21:30:29.189851  543793 logs.go:282] 0 containers: []
	W1212 21:30:29.189860  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:29.189868  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:29.189879  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:29.208924  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:29.208998  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:29.300074  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:29.300136  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:29.300163  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:29.332599  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:29.332642  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:29.368520  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:29.368550  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:31.953420  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:31.964593  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:31.964662  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:31.991852  543793 cri.go:89] found id: ""
	I1212 21:30:31.991877  543793 logs.go:282] 0 containers: []
	W1212 21:30:31.991885  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:31.991891  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:31.991953  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:32.027041  543793 cri.go:89] found id: ""
	I1212 21:30:32.027064  543793 logs.go:282] 0 containers: []
	W1212 21:30:32.027072  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:32.027078  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:32.027140  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:32.053809  543793 cri.go:89] found id: ""
	I1212 21:30:32.053832  543793 logs.go:282] 0 containers: []
	W1212 21:30:32.053841  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:32.053847  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:32.053905  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:32.079033  543793 cri.go:89] found id: ""
	I1212 21:30:32.079060  543793 logs.go:282] 0 containers: []
	W1212 21:30:32.079069  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:32.079075  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:32.079131  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:32.107523  543793 cri.go:89] found id: ""
	I1212 21:30:32.107549  543793 logs.go:282] 0 containers: []
	W1212 21:30:32.107558  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:32.107564  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:32.107621  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:32.143937  543793 cri.go:89] found id: ""
	I1212 21:30:32.143967  543793 logs.go:282] 0 containers: []
	W1212 21:30:32.143975  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:32.143983  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:32.144049  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:32.169669  543793 cri.go:89] found id: ""
	I1212 21:30:32.169695  543793 logs.go:282] 0 containers: []
	W1212 21:30:32.169704  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:32.169710  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:32.169766  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:32.195008  543793 cri.go:89] found id: ""
	I1212 21:30:32.195031  543793 logs.go:282] 0 containers: []
	W1212 21:30:32.195039  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:32.195048  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:32.195059  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:32.268614  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:32.268696  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:32.286152  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:32.286238  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:32.374740  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:32.374803  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:32.374831  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:32.409229  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:32.409261  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:34.965077  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:34.977854  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:34.977921  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:35.021320  543793 cri.go:89] found id: ""
	I1212 21:30:35.021345  543793 logs.go:282] 0 containers: []
	W1212 21:30:35.021354  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:35.021360  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:35.021425  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:35.054383  543793 cri.go:89] found id: ""
	I1212 21:30:35.054446  543793 logs.go:282] 0 containers: []
	W1212 21:30:35.054456  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:35.054463  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:35.054531  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:35.084288  543793 cri.go:89] found id: ""
	I1212 21:30:35.084317  543793 logs.go:282] 0 containers: []
	W1212 21:30:35.084326  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:35.084333  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:35.084424  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:35.117455  543793 cri.go:89] found id: ""
	I1212 21:30:35.117487  543793 logs.go:282] 0 containers: []
	W1212 21:30:35.117503  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:35.117509  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:35.117580  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:35.152061  543793 cri.go:89] found id: ""
	I1212 21:30:35.152087  543793 logs.go:282] 0 containers: []
	W1212 21:30:35.152097  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:35.152103  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:35.152165  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:35.179921  543793 cri.go:89] found id: ""
	I1212 21:30:35.179951  543793 logs.go:282] 0 containers: []
	W1212 21:30:35.179962  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:35.179969  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:35.180057  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:35.210488  543793 cri.go:89] found id: ""
	I1212 21:30:35.210523  543793 logs.go:282] 0 containers: []
	W1212 21:30:35.210533  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:35.210541  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:35.210611  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:35.242540  543793 cri.go:89] found id: ""
	I1212 21:30:35.242563  543793 logs.go:282] 0 containers: []
	W1212 21:30:35.242572  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:35.242581  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:35.242593  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:35.311262  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:35.311302  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:35.328176  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:35.328206  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:35.399992  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:35.400015  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:35.400028  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:35.430666  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:35.430703  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:37.960513  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:37.973845  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:37.973920  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:38.015650  543793 cri.go:89] found id: ""
	I1212 21:30:38.015687  543793 logs.go:282] 0 containers: []
	W1212 21:30:38.015704  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:38.015711  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:38.015778  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:38.047533  543793 cri.go:89] found id: ""
	I1212 21:30:38.047562  543793 logs.go:282] 0 containers: []
	W1212 21:30:38.047571  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:38.047577  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:38.047638  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:38.075812  543793 cri.go:89] found id: ""
	I1212 21:30:38.075862  543793 logs.go:282] 0 containers: []
	W1212 21:30:38.075872  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:38.075878  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:38.075941  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:38.108856  543793 cri.go:89] found id: ""
	I1212 21:30:38.108889  543793 logs.go:282] 0 containers: []
	W1212 21:30:38.108899  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:38.108906  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:38.109004  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:38.137731  543793 cri.go:89] found id: ""
	I1212 21:30:38.137756  543793 logs.go:282] 0 containers: []
	W1212 21:30:38.137764  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:38.137771  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:38.137829  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:38.163475  543793 cri.go:89] found id: ""
	I1212 21:30:38.163500  543793 logs.go:282] 0 containers: []
	W1212 21:30:38.163509  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:38.163515  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:38.163575  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:38.188848  543793 cri.go:89] found id: ""
	I1212 21:30:38.188873  543793 logs.go:282] 0 containers: []
	W1212 21:30:38.188881  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:38.188887  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:38.188959  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:38.214625  543793 cri.go:89] found id: ""
	I1212 21:30:38.214652  543793 logs.go:282] 0 containers: []
	W1212 21:30:38.214661  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:38.214670  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:38.214700  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:38.282251  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:38.282271  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:38.282284  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:38.313044  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:38.313081  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:38.341712  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:38.341742  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:38.409456  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:38.409494  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:40.926354  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:40.937067  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:40.937142  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:40.972882  543793 cri.go:89] found id: ""
	I1212 21:30:40.972907  543793 logs.go:282] 0 containers: []
	W1212 21:30:40.972916  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:40.972922  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:40.972981  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:41.003522  543793 cri.go:89] found id: ""
	I1212 21:30:41.003551  543793 logs.go:282] 0 containers: []
	W1212 21:30:41.003561  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:41.003567  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:41.003639  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:41.034408  543793 cri.go:89] found id: ""
	I1212 21:30:41.034434  543793 logs.go:282] 0 containers: []
	W1212 21:30:41.034443  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:41.034449  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:41.034506  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:41.060925  543793 cri.go:89] found id: ""
	I1212 21:30:41.060956  543793 logs.go:282] 0 containers: []
	W1212 21:30:41.060965  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:41.060971  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:41.061031  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:41.090870  543793 cri.go:89] found id: ""
	I1212 21:30:41.090896  543793 logs.go:282] 0 containers: []
	W1212 21:30:41.090904  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:41.090910  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:41.090970  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:41.121300  543793 cri.go:89] found id: ""
	I1212 21:30:41.121329  543793 logs.go:282] 0 containers: []
	W1212 21:30:41.121338  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:41.121345  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:41.121408  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:41.147977  543793 cri.go:89] found id: ""
	I1212 21:30:41.148005  543793 logs.go:282] 0 containers: []
	W1212 21:30:41.148013  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:41.148019  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:41.148079  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:41.180104  543793 cri.go:89] found id: ""
	I1212 21:30:41.180130  543793 logs.go:282] 0 containers: []
	W1212 21:30:41.180145  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:41.180155  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:41.180174  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:41.250753  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:41.250792  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:41.267924  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:41.267998  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:41.339262  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:41.339288  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:41.339310  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:41.371243  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:41.371280  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:43.901356  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:43.911467  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:43.911539  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:43.936540  543793 cri.go:89] found id: ""
	I1212 21:30:43.936567  543793 logs.go:282] 0 containers: []
	W1212 21:30:43.936576  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:43.936582  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:43.936641  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:43.967053  543793 cri.go:89] found id: ""
	I1212 21:30:43.967084  543793 logs.go:282] 0 containers: []
	W1212 21:30:43.967093  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:43.967100  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:43.967161  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:43.997052  543793 cri.go:89] found id: ""
	I1212 21:30:43.997081  543793 logs.go:282] 0 containers: []
	W1212 21:30:43.997091  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:43.997097  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:43.997159  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:44.032605  543793 cri.go:89] found id: ""
	I1212 21:30:44.032676  543793 logs.go:282] 0 containers: []
	W1212 21:30:44.032691  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:44.032698  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:44.032759  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:44.059084  543793 cri.go:89] found id: ""
	I1212 21:30:44.059109  543793 logs.go:282] 0 containers: []
	W1212 21:30:44.059117  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:44.059129  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:44.059189  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:44.085391  543793 cri.go:89] found id: ""
	I1212 21:30:44.085415  543793 logs.go:282] 0 containers: []
	W1212 21:30:44.085423  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:44.085429  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:44.085487  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:44.111256  543793 cri.go:89] found id: ""
	I1212 21:30:44.111280  543793 logs.go:282] 0 containers: []
	W1212 21:30:44.111289  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:44.111294  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:44.111354  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:44.139433  543793 cri.go:89] found id: ""
	I1212 21:30:44.139514  543793 logs.go:282] 0 containers: []
	W1212 21:30:44.139539  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:44.139560  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:44.139603  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:44.206543  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:44.206580  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:44.223317  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:44.223345  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:44.292855  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:44.292875  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:44.292887  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:44.323834  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:44.323866  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:46.852049  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:46.862319  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:46.862389  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:46.889030  543793 cri.go:89] found id: ""
	I1212 21:30:46.889056  543793 logs.go:282] 0 containers: []
	W1212 21:30:46.889064  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:46.889070  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:46.889128  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:46.917784  543793 cri.go:89] found id: ""
	I1212 21:30:46.917810  543793 logs.go:282] 0 containers: []
	W1212 21:30:46.917818  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:46.917825  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:46.917912  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:46.948930  543793 cri.go:89] found id: ""
	I1212 21:30:46.948958  543793 logs.go:282] 0 containers: []
	W1212 21:30:46.948967  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:46.948973  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:46.949037  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:46.978639  543793 cri.go:89] found id: ""
	I1212 21:30:46.978679  543793 logs.go:282] 0 containers: []
	W1212 21:30:46.978689  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:46.978696  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:46.978770  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:47.016702  543793 cri.go:89] found id: ""
	I1212 21:30:47.016780  543793 logs.go:282] 0 containers: []
	W1212 21:30:47.016816  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:47.016840  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:47.016929  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:47.045555  543793 cri.go:89] found id: ""
	I1212 21:30:47.045582  543793 logs.go:282] 0 containers: []
	W1212 21:30:47.045603  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:47.045611  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:47.045671  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:47.072899  543793 cri.go:89] found id: ""
	I1212 21:30:47.072926  543793 logs.go:282] 0 containers: []
	W1212 21:30:47.072935  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:47.072942  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:47.073003  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:47.101776  543793 cri.go:89] found id: ""
	I1212 21:30:47.101802  543793 logs.go:282] 0 containers: []
	W1212 21:30:47.101810  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:47.101819  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:47.101852  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:47.170764  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:47.170798  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:47.170814  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:47.201697  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:47.201734  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:47.233337  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:47.233406  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:47.300662  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:47.300703  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
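Every retry in this loop starts the same way: pgrep for a kube-apiserver process, then one crictl query per control-plane component; the empty `found id: ""` results mean the runtime reports no container for any of them. A minimal stand-alone sketch of that per-component probe (an assumption-laden illustration, not minikube's cri.go; it assumes crictl and sudo are present and uses the same component names as the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner",
	}
	for _, name := range components {
		// Same query the log shows: all states, IDs only, filtered by container name.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		ids := strings.Fields(string(out))
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s): %v\n", name, len(ids), ids)
	}
}
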
	I1212 21:30:49.818129  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:49.829117  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:49.829195  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:49.855279  543793 cri.go:89] found id: ""
	I1212 21:30:49.855305  543793 logs.go:282] 0 containers: []
	W1212 21:30:49.855313  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:49.855320  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:49.855377  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:49.880998  543793 cri.go:89] found id: ""
	I1212 21:30:49.881024  543793 logs.go:282] 0 containers: []
	W1212 21:30:49.881034  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:49.881040  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:49.881098  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:49.907141  543793 cri.go:89] found id: ""
	I1212 21:30:49.907209  543793 logs.go:282] 0 containers: []
	W1212 21:30:49.907234  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:49.907253  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:49.907346  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:49.933795  543793 cri.go:89] found id: ""
	I1212 21:30:49.933871  543793 logs.go:282] 0 containers: []
	W1212 21:30:49.933892  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:49.933900  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:49.933974  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:49.969530  543793 cri.go:89] found id: ""
	I1212 21:30:49.969552  543793 logs.go:282] 0 containers: []
	W1212 21:30:49.969561  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:49.969567  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:49.969633  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:50.014000  543793 cri.go:89] found id: ""
	I1212 21:30:50.014028  543793 logs.go:282] 0 containers: []
	W1212 21:30:50.014038  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:50.014045  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:50.014117  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:50.047141  543793 cri.go:89] found id: ""
	I1212 21:30:50.047221  543793 logs.go:282] 0 containers: []
	W1212 21:30:50.047246  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:50.047268  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:50.047355  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:50.078239  543793 cri.go:89] found id: ""
	I1212 21:30:50.078320  543793 logs.go:282] 0 containers: []
	W1212 21:30:50.078346  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:50.078361  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:50.078374  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:50.145366  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:50.145406  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:50.161977  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:50.162007  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:50.226627  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:50.226647  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:50.226660  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:50.257987  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:50.258022  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:52.804197  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:52.815671  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:52.815747  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:52.848271  543793 cri.go:89] found id: ""
	I1212 21:30:52.848308  543793 logs.go:282] 0 containers: []
	W1212 21:30:52.848318  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:52.848324  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:52.848423  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:52.890965  543793 cri.go:89] found id: ""
	I1212 21:30:52.890992  543793 logs.go:282] 0 containers: []
	W1212 21:30:52.891005  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:52.891016  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:52.891091  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:52.922763  543793 cri.go:89] found id: ""
	I1212 21:30:52.922797  543793 logs.go:282] 0 containers: []
	W1212 21:30:52.922807  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:52.922813  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:52.922874  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:52.961636  543793 cri.go:89] found id: ""
	I1212 21:30:52.961666  543793 logs.go:282] 0 containers: []
	W1212 21:30:52.961674  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:52.961680  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:52.961737  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:53.046398  543793 cri.go:89] found id: ""
	I1212 21:30:53.046427  543793 logs.go:282] 0 containers: []
	W1212 21:30:53.046435  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:53.046442  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:53.046513  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:53.094191  543793 cri.go:89] found id: ""
	I1212 21:30:53.094221  543793 logs.go:282] 0 containers: []
	W1212 21:30:53.094230  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:53.094236  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:53.094295  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:53.125540  543793 cri.go:89] found id: ""
	I1212 21:30:53.125568  543793 logs.go:282] 0 containers: []
	W1212 21:30:53.125577  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:53.125583  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:53.125643  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:53.151801  543793 cri.go:89] found id: ""
	I1212 21:30:53.151829  543793 logs.go:282] 0 containers: []
	W1212 21:30:53.151846  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:53.151855  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:53.151867  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:53.168004  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:53.168036  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:53.234159  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:53.234180  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:53.234192  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:53.266042  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:53.266079  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:53.297561  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:53.297590  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:55.867468  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:55.878028  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:55.878121  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:55.905142  543793 cri.go:89] found id: ""
	I1212 21:30:55.905169  543793 logs.go:282] 0 containers: []
	W1212 21:30:55.905177  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:55.905185  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:55.905243  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:55.930199  543793 cri.go:89] found id: ""
	I1212 21:30:55.930224  543793 logs.go:282] 0 containers: []
	W1212 21:30:55.930232  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:55.930239  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:55.930295  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:55.971470  543793 cri.go:89] found id: ""
	I1212 21:30:55.971496  543793 logs.go:282] 0 containers: []
	W1212 21:30:55.971505  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:55.971511  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:55.971567  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:56.008757  543793 cri.go:89] found id: ""
	I1212 21:30:56.008787  543793 logs.go:282] 0 containers: []
	W1212 21:30:56.008797  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:56.008803  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:56.008872  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:56.045649  543793 cri.go:89] found id: ""
	I1212 21:30:56.045698  543793 logs.go:282] 0 containers: []
	W1212 21:30:56.045706  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:56.045713  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:56.045773  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:56.076404  543793 cri.go:89] found id: ""
	I1212 21:30:56.076434  543793 logs.go:282] 0 containers: []
	W1212 21:30:56.076443  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:56.076450  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:56.076507  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:56.101760  543793 cri.go:89] found id: ""
	I1212 21:30:56.101838  543793 logs.go:282] 0 containers: []
	W1212 21:30:56.101861  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:56.101883  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:56.101972  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:56.127350  543793 cri.go:89] found id: ""
	I1212 21:30:56.127417  543793 logs.go:282] 0 containers: []
	W1212 21:30:56.127440  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:56.127463  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:56.127501  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:56.158262  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:56.158294  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:56.186737  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:56.186763  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:56.254202  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:56.254240  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:30:56.270455  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:56.270481  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:56.341020  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
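The recurring "describe nodes" failures all reduce to one symptom: nothing is serving on localhost:8443, so every kubectl call through the node kubeconfig is refused. A quick, hypothetical way to confirm that from the node itself, with a plain TCP dial and no kubectl involved:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the apiserver port the kubeconfig points at (assumes we run on the node).
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("localhost:8443 not reachable:", err) // matches the "connection refused" seen above
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8443")
}
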
	I1212 21:30:58.841241  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:30:58.851485  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:30:58.851559  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:30:58.878346  543793 cri.go:89] found id: ""
	I1212 21:30:58.878371  543793 logs.go:282] 0 containers: []
	W1212 21:30:58.878379  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:30:58.878386  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:30:58.878450  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:30:58.904269  543793 cri.go:89] found id: ""
	I1212 21:30:58.904294  543793 logs.go:282] 0 containers: []
	W1212 21:30:58.904302  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:30:58.904309  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:30:58.904390  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:30:58.929755  543793 cri.go:89] found id: ""
	I1212 21:30:58.929828  543793 logs.go:282] 0 containers: []
	W1212 21:30:58.929858  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:30:58.929876  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:30:58.929959  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:30:58.961648  543793 cri.go:89] found id: ""
	I1212 21:30:58.961671  543793 logs.go:282] 0 containers: []
	W1212 21:30:58.961679  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:30:58.961685  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:30:58.961742  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:30:58.993800  543793 cri.go:89] found id: ""
	I1212 21:30:58.993822  543793 logs.go:282] 0 containers: []
	W1212 21:30:58.993831  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:30:58.993838  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:30:58.993901  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:30:59.025731  543793 cri.go:89] found id: ""
	I1212 21:30:59.025754  543793 logs.go:282] 0 containers: []
	W1212 21:30:59.025763  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:30:59.025769  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:30:59.025825  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:30:59.051238  543793 cri.go:89] found id: ""
	I1212 21:30:59.051260  543793 logs.go:282] 0 containers: []
	W1212 21:30:59.051269  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:30:59.051274  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:30:59.051337  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:30:59.077697  543793 cri.go:89] found id: ""
	I1212 21:30:59.077724  543793 logs.go:282] 0 containers: []
	W1212 21:30:59.077733  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:30:59.077743  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:30:59.077755  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:30:59.143240  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:30:59.143261  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:30:59.143279  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:30:59.173915  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:30:59.173946  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:30:59.202602  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:30:59.202630  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:30:59.272559  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:30:59.272639  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:01.793272  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:01.803274  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:01.803364  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:01.830748  543793 cri.go:89] found id: ""
	I1212 21:31:01.830771  543793 logs.go:282] 0 containers: []
	W1212 21:31:01.830779  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:01.830786  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:01.830843  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:01.862650  543793 cri.go:89] found id: ""
	I1212 21:31:01.862678  543793 logs.go:282] 0 containers: []
	W1212 21:31:01.862686  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:01.862692  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:01.862749  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:01.888287  543793 cri.go:89] found id: ""
	I1212 21:31:01.888310  543793 logs.go:282] 0 containers: []
	W1212 21:31:01.888318  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:01.888324  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:01.888409  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:01.917927  543793 cri.go:89] found id: ""
	I1212 21:31:01.917954  543793 logs.go:282] 0 containers: []
	W1212 21:31:01.917963  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:01.917969  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:01.918027  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:01.948440  543793 cri.go:89] found id: ""
	I1212 21:31:01.948464  543793 logs.go:282] 0 containers: []
	W1212 21:31:01.948473  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:01.948479  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:01.948539  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:01.983044  543793 cri.go:89] found id: ""
	I1212 21:31:01.983069  543793 logs.go:282] 0 containers: []
	W1212 21:31:01.983078  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:01.983084  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:01.983148  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:02.020528  543793 cri.go:89] found id: ""
	I1212 21:31:02.020599  543793 logs.go:282] 0 containers: []
	W1212 21:31:02.020623  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:02.020643  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:02.020732  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:02.051210  543793 cri.go:89] found id: ""
	I1212 21:31:02.051233  543793 logs.go:282] 0 containers: []
	W1212 21:31:02.051241  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:02.051250  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:02.051262  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:02.125511  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:02.125552  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:02.143096  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:02.143126  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:02.208669  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:02.208691  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:02.208705  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:02.244619  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:02.244658  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:04.782027  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:04.792231  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:04.792305  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:04.818087  543793 cri.go:89] found id: ""
	I1212 21:31:04.818115  543793 logs.go:282] 0 containers: []
	W1212 21:31:04.818124  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:04.818130  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:04.818188  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:04.844437  543793 cri.go:89] found id: ""
	I1212 21:31:04.844460  543793 logs.go:282] 0 containers: []
	W1212 21:31:04.844469  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:04.844476  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:04.844534  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:04.869681  543793 cri.go:89] found id: ""
	I1212 21:31:04.869708  543793 logs.go:282] 0 containers: []
	W1212 21:31:04.869716  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:04.869723  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:04.869783  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:04.899155  543793 cri.go:89] found id: ""
	I1212 21:31:04.899186  543793 logs.go:282] 0 containers: []
	W1212 21:31:04.899195  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:04.899201  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:04.899258  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:04.924179  543793 cri.go:89] found id: ""
	I1212 21:31:04.924204  543793 logs.go:282] 0 containers: []
	W1212 21:31:04.924213  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:04.924219  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:04.924280  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:04.949848  543793 cri.go:89] found id: ""
	I1212 21:31:04.949870  543793 logs.go:282] 0 containers: []
	W1212 21:31:04.949879  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:04.949891  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:04.949947  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:04.987588  543793 cri.go:89] found id: ""
	I1212 21:31:04.987617  543793 logs.go:282] 0 containers: []
	W1212 21:31:04.987626  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:04.987632  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:04.987689  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:05.032117  543793 cri.go:89] found id: ""
	I1212 21:31:05.032144  543793 logs.go:282] 0 containers: []
	W1212 21:31:05.032154  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:05.032163  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:05.032182  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:05.099350  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:05.099393  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:05.116109  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:05.116137  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:05.185041  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:05.185063  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:05.185076  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:05.217485  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:05.217523  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:07.747047  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:07.757598  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:07.757669  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:07.785537  543793 cri.go:89] found id: ""
	I1212 21:31:07.785564  543793 logs.go:282] 0 containers: []
	W1212 21:31:07.785573  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:07.785579  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:07.785636  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:07.812653  543793 cri.go:89] found id: ""
	I1212 21:31:07.812677  543793 logs.go:282] 0 containers: []
	W1212 21:31:07.812686  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:07.812692  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:07.812749  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:07.840091  543793 cri.go:89] found id: ""
	I1212 21:31:07.840117  543793 logs.go:282] 0 containers: []
	W1212 21:31:07.840126  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:07.840132  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:07.840189  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:07.866988  543793 cri.go:89] found id: ""
	I1212 21:31:07.867013  543793 logs.go:282] 0 containers: []
	W1212 21:31:07.867022  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:07.867028  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:07.867086  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:07.893479  543793 cri.go:89] found id: ""
	I1212 21:31:07.893508  543793 logs.go:282] 0 containers: []
	W1212 21:31:07.893517  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:07.893524  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:07.893581  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:07.919608  543793 cri.go:89] found id: ""
	I1212 21:31:07.919636  543793 logs.go:282] 0 containers: []
	W1212 21:31:07.919645  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:07.919651  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:07.919710  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:07.945931  543793 cri.go:89] found id: ""
	I1212 21:31:07.945957  543793 logs.go:282] 0 containers: []
	W1212 21:31:07.945966  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:07.945972  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:07.946030  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:07.976952  543793 cri.go:89] found id: ""
	I1212 21:31:07.977029  543793 logs.go:282] 0 containers: []
	W1212 21:31:07.977053  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:07.977076  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:07.977113  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:08.029305  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:08.029332  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:08.103184  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:08.103223  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:08.120786  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:08.120870  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:08.184439  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:08.184462  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:08.184477  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:10.716205  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:10.726727  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:10.726812  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:10.754250  543793 cri.go:89] found id: ""
	I1212 21:31:10.754276  543793 logs.go:282] 0 containers: []
	W1212 21:31:10.754285  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:10.754291  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:10.754348  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:10.780412  543793 cri.go:89] found id: ""
	I1212 21:31:10.780439  543793 logs.go:282] 0 containers: []
	W1212 21:31:10.780448  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:10.780454  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:10.780515  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:10.807291  543793 cri.go:89] found id: ""
	I1212 21:31:10.807318  543793 logs.go:282] 0 containers: []
	W1212 21:31:10.807327  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:10.807333  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:10.807390  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:10.833233  543793 cri.go:89] found id: ""
	I1212 21:31:10.833258  543793 logs.go:282] 0 containers: []
	W1212 21:31:10.833267  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:10.833273  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:10.833337  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:10.859957  543793 cri.go:89] found id: ""
	I1212 21:31:10.859981  543793 logs.go:282] 0 containers: []
	W1212 21:31:10.859990  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:10.859996  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:10.860079  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:10.888964  543793 cri.go:89] found id: ""
	I1212 21:31:10.888990  543793 logs.go:282] 0 containers: []
	W1212 21:31:10.888999  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:10.889006  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:10.889070  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:10.919329  543793 cri.go:89] found id: ""
	I1212 21:31:10.919355  543793 logs.go:282] 0 containers: []
	W1212 21:31:10.919364  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:10.919370  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:10.919426  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:10.944631  543793 cri.go:89] found id: ""
	I1212 21:31:10.944655  543793 logs.go:282] 0 containers: []
	W1212 21:31:10.944664  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:10.944673  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:10.944686  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:10.979842  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:10.979898  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:11.059728  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:11.059770  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:11.076775  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:11.076809  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:11.148390  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:11.148413  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:11.148427  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:13.682934  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:13.692959  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:13.693028  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:13.720016  543793 cri.go:89] found id: ""
	I1212 21:31:13.720043  543793 logs.go:282] 0 containers: []
	W1212 21:31:13.720060  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:13.720066  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:13.720127  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:13.749983  543793 cri.go:89] found id: ""
	I1212 21:31:13.750010  543793 logs.go:282] 0 containers: []
	W1212 21:31:13.750019  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:13.750025  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:13.750082  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:13.776476  543793 cri.go:89] found id: ""
	I1212 21:31:13.776504  543793 logs.go:282] 0 containers: []
	W1212 21:31:13.776513  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:13.776519  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:13.776583  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:13.805052  543793 cri.go:89] found id: ""
	I1212 21:31:13.805077  543793 logs.go:282] 0 containers: []
	W1212 21:31:13.805085  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:13.805092  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:13.805147  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:13.831294  543793 cri.go:89] found id: ""
	I1212 21:31:13.831318  543793 logs.go:282] 0 containers: []
	W1212 21:31:13.831326  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:13.831331  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:13.831391  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:13.859253  543793 cri.go:89] found id: ""
	I1212 21:31:13.859279  543793 logs.go:282] 0 containers: []
	W1212 21:31:13.859288  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:13.859293  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:13.859349  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:13.885763  543793 cri.go:89] found id: ""
	I1212 21:31:13.885843  543793 logs.go:282] 0 containers: []
	W1212 21:31:13.885866  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:13.885881  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:13.885956  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:13.913607  543793 cri.go:89] found id: ""
	I1212 21:31:13.913633  543793 logs.go:282] 0 containers: []
	W1212 21:31:13.913642  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:13.913651  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:13.913663  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:13.929402  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:13.929430  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:14.017561  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:14.017627  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:14.017649  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:14.049779  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:14.049829  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:14.080074  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:14.080152  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:16.650859  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:16.663448  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:16.663527  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:16.704528  543793 cri.go:89] found id: ""
	I1212 21:31:16.704695  543793 logs.go:282] 0 containers: []
	W1212 21:31:16.704712  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:16.704720  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:16.704781  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:16.743944  543793 cri.go:89] found id: ""
	I1212 21:31:16.743976  543793 logs.go:282] 0 containers: []
	W1212 21:31:16.743985  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:16.743991  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:16.744055  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:16.787430  543793 cri.go:89] found id: ""
	I1212 21:31:16.787461  543793 logs.go:282] 0 containers: []
	W1212 21:31:16.787470  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:16.787476  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:16.787537  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:16.837719  543793 cri.go:89] found id: ""
	I1212 21:31:16.837754  543793 logs.go:282] 0 containers: []
	W1212 21:31:16.837766  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:16.837773  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:16.837847  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:16.886926  543793 cri.go:89] found id: ""
	I1212 21:31:16.886959  543793 logs.go:282] 0 containers: []
	W1212 21:31:16.886968  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:16.886974  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:16.887045  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:16.926617  543793 cri.go:89] found id: ""
	I1212 21:31:16.926648  543793 logs.go:282] 0 containers: []
	W1212 21:31:16.926657  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:16.926663  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:16.926720  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:16.965935  543793 cri.go:89] found id: ""
	I1212 21:31:16.965964  543793 logs.go:282] 0 containers: []
	W1212 21:31:16.965974  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:16.965980  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:16.966040  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:17.006609  543793 cri.go:89] found id: ""
	I1212 21:31:17.006640  543793 logs.go:282] 0 containers: []
	W1212 21:31:17.006649  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:17.006660  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:17.006672  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:17.073093  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:17.073116  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:17.073129  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:17.103744  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:17.103783  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:17.137131  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:17.137159  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:17.207436  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:17.207472  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:19.723703  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:19.743033  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:19.743141  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:19.803200  543793 cri.go:89] found id: ""
	I1212 21:31:19.803229  543793 logs.go:282] 0 containers: []
	W1212 21:31:19.803238  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:19.803253  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:19.803320  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:19.851960  543793 cri.go:89] found id: ""
	I1212 21:31:19.851988  543793 logs.go:282] 0 containers: []
	W1212 21:31:19.852002  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:19.852009  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:19.852068  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:19.886034  543793 cri.go:89] found id: ""
	I1212 21:31:19.886062  543793 logs.go:282] 0 containers: []
	W1212 21:31:19.886078  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:19.886085  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:19.886197  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:19.928286  543793 cri.go:89] found id: ""
	I1212 21:31:19.928313  543793 logs.go:282] 0 containers: []
	W1212 21:31:19.928322  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:19.928328  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:19.928409  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:19.988156  543793 cri.go:89] found id: ""
	I1212 21:31:19.988192  543793 logs.go:282] 0 containers: []
	W1212 21:31:19.988203  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:19.988210  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:19.988291  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:20.073250  543793 cri.go:89] found id: ""
	I1212 21:31:20.073275  543793 logs.go:282] 0 containers: []
	W1212 21:31:20.073284  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:20.073291  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:20.073358  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:20.110351  543793 cri.go:89] found id: ""
	I1212 21:31:20.110375  543793 logs.go:282] 0 containers: []
	W1212 21:31:20.110384  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:20.110390  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:20.110451  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:20.151351  543793 cri.go:89] found id: ""
	I1212 21:31:20.151374  543793 logs.go:282] 0 containers: []
	W1212 21:31:20.151383  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:20.151391  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:20.151403  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:20.227433  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:20.227511  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:20.246792  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:20.246818  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:20.344689  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:20.344760  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:20.344789  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:20.379812  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:20.379892  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:22.917422  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:22.929796  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:22.929865  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:22.974918  543793 cri.go:89] found id: ""
	I1212 21:31:22.974941  543793 logs.go:282] 0 containers: []
	W1212 21:31:22.974949  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:22.974955  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:22.975017  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:23.049022  543793 cri.go:89] found id: ""
	I1212 21:31:23.049045  543793 logs.go:282] 0 containers: []
	W1212 21:31:23.049053  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:23.049059  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:23.049121  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:23.095885  543793 cri.go:89] found id: ""
	I1212 21:31:23.095914  543793 logs.go:282] 0 containers: []
	W1212 21:31:23.095924  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:23.095930  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:23.095995  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:23.127979  543793 cri.go:89] found id: ""
	I1212 21:31:23.128005  543793 logs.go:282] 0 containers: []
	W1212 21:31:23.128013  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:23.128019  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:23.128078  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:23.161425  543793 cri.go:89] found id: ""
	I1212 21:31:23.161452  543793 logs.go:282] 0 containers: []
	W1212 21:31:23.161461  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:23.161474  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:23.161530  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:23.202170  543793 cri.go:89] found id: ""
	I1212 21:31:23.202196  543793 logs.go:282] 0 containers: []
	W1212 21:31:23.202205  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:23.202211  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:23.202267  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:23.231092  543793 cri.go:89] found id: ""
	I1212 21:31:23.231118  543793 logs.go:282] 0 containers: []
	W1212 21:31:23.231128  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:23.231134  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:23.231193  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:23.277694  543793 cri.go:89] found id: ""
	I1212 21:31:23.277721  543793 logs.go:282] 0 containers: []
	W1212 21:31:23.277730  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:23.277738  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:23.277750  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:23.317961  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:23.317989  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:23.398182  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:23.398224  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:23.414738  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:23.414761  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:23.497672  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:23.497695  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:23.497708  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:26.035167  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:26.047020  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:26.047111  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:26.075301  543793 cri.go:89] found id: ""
	I1212 21:31:26.075330  543793 logs.go:282] 0 containers: []
	W1212 21:31:26.075340  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:26.075346  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:26.075408  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:26.102920  543793 cri.go:89] found id: ""
	I1212 21:31:26.102944  543793 logs.go:282] 0 containers: []
	W1212 21:31:26.102953  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:26.102959  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:26.103015  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:26.129052  543793 cri.go:89] found id: ""
	I1212 21:31:26.129077  543793 logs.go:282] 0 containers: []
	W1212 21:31:26.129085  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:26.129092  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:26.129149  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:26.154788  543793 cri.go:89] found id: ""
	I1212 21:31:26.154816  543793 logs.go:282] 0 containers: []
	W1212 21:31:26.154827  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:26.154833  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:26.154896  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:26.180649  543793 cri.go:89] found id: ""
	I1212 21:31:26.180672  543793 logs.go:282] 0 containers: []
	W1212 21:31:26.180689  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:26.180696  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:26.180753  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:26.210898  543793 cri.go:89] found id: ""
	I1212 21:31:26.210923  543793 logs.go:282] 0 containers: []
	W1212 21:31:26.210933  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:26.210939  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:26.210997  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:26.235477  543793 cri.go:89] found id: ""
	I1212 21:31:26.235503  543793 logs.go:282] 0 containers: []
	W1212 21:31:26.235512  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:26.235518  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:26.235578  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:26.262577  543793 cri.go:89] found id: ""
	I1212 21:31:26.262602  543793 logs.go:282] 0 containers: []
	W1212 21:31:26.262610  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:26.262619  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:26.262630  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:26.334362  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:26.334407  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:26.350342  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:26.350372  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:26.415513  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:26.415533  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:26.415545  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:26.448032  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:26.448064  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:28.987674  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:28.998235  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:28.998304  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:29.026422  543793 cri.go:89] found id: ""
	I1212 21:31:29.026449  543793 logs.go:282] 0 containers: []
	W1212 21:31:29.026458  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:29.026464  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:29.026523  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:29.052043  543793 cri.go:89] found id: ""
	I1212 21:31:29.052069  543793 logs.go:282] 0 containers: []
	W1212 21:31:29.052077  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:29.052083  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:29.052142  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:29.076949  543793 cri.go:89] found id: ""
	I1212 21:31:29.076975  543793 logs.go:282] 0 containers: []
	W1212 21:31:29.076983  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:29.076990  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:29.077045  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:29.102978  543793 cri.go:89] found id: ""
	I1212 21:31:29.103004  543793 logs.go:282] 0 containers: []
	W1212 21:31:29.103013  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:29.103019  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:29.103074  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:29.127930  543793 cri.go:89] found id: ""
	I1212 21:31:29.127956  543793 logs.go:282] 0 containers: []
	W1212 21:31:29.127965  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:29.127971  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:29.128027  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:29.153686  543793 cri.go:89] found id: ""
	I1212 21:31:29.153711  543793 logs.go:282] 0 containers: []
	W1212 21:31:29.153722  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:29.153729  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:29.153786  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:29.179840  543793 cri.go:89] found id: ""
	I1212 21:31:29.179872  543793 logs.go:282] 0 containers: []
	W1212 21:31:29.179881  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:29.179887  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:29.179946  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:29.205014  543793 cri.go:89] found id: ""
	I1212 21:31:29.205038  543793 logs.go:282] 0 containers: []
	W1212 21:31:29.205047  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:29.205056  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:29.205068  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:29.272826  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:29.272863  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:29.296976  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:29.297005  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:29.366811  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:29.366832  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:29.366846  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:29.397357  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:29.397395  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:31.927383  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:31.937850  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:31.937923  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:31.969739  543793 cri.go:89] found id: ""
	I1212 21:31:31.969766  543793 logs.go:282] 0 containers: []
	W1212 21:31:31.969774  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:31.969781  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:31.969837  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:31.998739  543793 cri.go:89] found id: ""
	I1212 21:31:31.998766  543793 logs.go:282] 0 containers: []
	W1212 21:31:31.998775  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:31.998781  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:31.998838  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:32.031870  543793 cri.go:89] found id: ""
	I1212 21:31:32.031897  543793 logs.go:282] 0 containers: []
	W1212 21:31:32.031906  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:32.031912  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:32.031971  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:32.065140  543793 cri.go:89] found id: ""
	I1212 21:31:32.065164  543793 logs.go:282] 0 containers: []
	W1212 21:31:32.065173  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:32.065179  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:32.065236  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:32.091644  543793 cri.go:89] found id: ""
	I1212 21:31:32.091674  543793 logs.go:282] 0 containers: []
	W1212 21:31:32.091683  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:32.091689  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:32.091750  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:32.123569  543793 cri.go:89] found id: ""
	I1212 21:31:32.123600  543793 logs.go:282] 0 containers: []
	W1212 21:31:32.123611  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:32.123618  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:32.123685  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:32.153334  543793 cri.go:89] found id: ""
	I1212 21:31:32.153359  543793 logs.go:282] 0 containers: []
	W1212 21:31:32.153367  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:32.153373  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:32.153438  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:32.179928  543793 cri.go:89] found id: ""
	I1212 21:31:32.179954  543793 logs.go:282] 0 containers: []
	W1212 21:31:32.179963  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:32.179972  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:32.179983  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:32.245195  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:32.245219  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:32.245231  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:32.280072  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:32.280117  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:32.309601  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:32.309630  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:32.377614  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:32.377651  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:34.896506  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:34.909521  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:34.909595  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:34.937491  543793 cri.go:89] found id: ""
	I1212 21:31:34.937522  543793 logs.go:282] 0 containers: []
	W1212 21:31:34.937532  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:34.937539  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:34.937600  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:34.967939  543793 cri.go:89] found id: ""
	I1212 21:31:34.967963  543793 logs.go:282] 0 containers: []
	W1212 21:31:34.967972  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:34.967978  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:34.968037  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:34.998817  543793 cri.go:89] found id: ""
	I1212 21:31:34.998844  543793 logs.go:282] 0 containers: []
	W1212 21:31:34.998853  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:34.998859  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:34.998922  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:35.033149  543793 cri.go:89] found id: ""
	I1212 21:31:35.033181  543793 logs.go:282] 0 containers: []
	W1212 21:31:35.033190  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:35.033196  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:35.033255  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:35.059836  543793 cri.go:89] found id: ""
	I1212 21:31:35.059858  543793 logs.go:282] 0 containers: []
	W1212 21:31:35.059874  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:35.059881  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:35.059938  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:35.088430  543793 cri.go:89] found id: ""
	I1212 21:31:35.088461  543793 logs.go:282] 0 containers: []
	W1212 21:31:35.088475  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:35.088482  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:35.088601  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:35.114732  543793 cri.go:89] found id: ""
	I1212 21:31:35.114759  543793 logs.go:282] 0 containers: []
	W1212 21:31:35.114768  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:35.114774  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:35.114835  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:35.140404  543793 cri.go:89] found id: ""
	I1212 21:31:35.140433  543793 logs.go:282] 0 containers: []
	W1212 21:31:35.140442  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:35.140451  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:35.140464  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:35.208018  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:35.208055  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:35.225914  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:35.225941  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:35.301270  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:35.301292  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:35.301305  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:35.332399  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:35.332435  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:37.862511  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:37.873167  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:37.873243  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:37.902074  543793 cri.go:89] found id: ""
	I1212 21:31:37.902104  543793 logs.go:282] 0 containers: []
	W1212 21:31:37.902114  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:37.902121  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:37.902179  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:37.929827  543793 cri.go:89] found id: ""
	I1212 21:31:37.929854  543793 logs.go:282] 0 containers: []
	W1212 21:31:37.929865  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:37.929871  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:37.929930  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:37.957332  543793 cri.go:89] found id: ""
	I1212 21:31:37.957355  543793 logs.go:282] 0 containers: []
	W1212 21:31:37.957369  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:37.957375  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:37.957434  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:37.995649  543793 cri.go:89] found id: ""
	I1212 21:31:37.995673  543793 logs.go:282] 0 containers: []
	W1212 21:31:37.995683  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:37.995689  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:37.995748  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:38.033689  543793 cri.go:89] found id: ""
	I1212 21:31:38.033720  543793 logs.go:282] 0 containers: []
	W1212 21:31:38.033730  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:38.033736  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:38.033803  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:38.065337  543793 cri.go:89] found id: ""
	I1212 21:31:38.065364  543793 logs.go:282] 0 containers: []
	W1212 21:31:38.065373  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:38.065380  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:38.065441  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:38.094078  543793 cri.go:89] found id: ""
	I1212 21:31:38.094106  543793 logs.go:282] 0 containers: []
	W1212 21:31:38.094115  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:38.094122  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:38.094181  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:38.121822  543793 cri.go:89] found id: ""
	I1212 21:31:38.121847  543793 logs.go:282] 0 containers: []
	W1212 21:31:38.121856  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:38.121865  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:38.121876  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:38.188444  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:38.188480  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:38.204961  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:38.204991  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:38.278834  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:38.278867  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:38.278881  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:38.310257  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:38.310292  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:40.841863  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:40.851915  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:40.851990  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:40.881114  543793 cri.go:89] found id: ""
	I1212 21:31:40.881137  543793 logs.go:282] 0 containers: []
	W1212 21:31:40.881147  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:40.881153  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:40.881212  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:40.911762  543793 cri.go:89] found id: ""
	I1212 21:31:40.911790  543793 logs.go:282] 0 containers: []
	W1212 21:31:40.911799  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:40.911815  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:40.911883  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:40.938445  543793 cri.go:89] found id: ""
	I1212 21:31:40.938474  543793 logs.go:282] 0 containers: []
	W1212 21:31:40.938482  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:40.938488  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:40.938549  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:40.972198  543793 cri.go:89] found id: ""
	I1212 21:31:40.972226  543793 logs.go:282] 0 containers: []
	W1212 21:31:40.972235  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:40.972243  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:40.972300  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:41.012169  543793 cri.go:89] found id: ""
	I1212 21:31:41.012198  543793 logs.go:282] 0 containers: []
	W1212 21:31:41.012208  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:41.012215  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:41.012280  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:41.042719  543793 cri.go:89] found id: ""
	I1212 21:31:41.042747  543793 logs.go:282] 0 containers: []
	W1212 21:31:41.042756  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:41.042762  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:41.042822  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:41.079530  543793 cri.go:89] found id: ""
	I1212 21:31:41.079558  543793 logs.go:282] 0 containers: []
	W1212 21:31:41.079567  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:41.079579  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:41.079656  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:41.111640  543793 cri.go:89] found id: ""
	I1212 21:31:41.111666  543793 logs.go:282] 0 containers: []
	W1212 21:31:41.111674  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:41.111684  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:41.111695  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:41.179787  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:41.179810  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:41.179822  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:41.210433  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:41.210469  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:41.241231  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:41.241310  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:41.311712  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:41.311748  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:43.828837  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:43.842534  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:43.842603  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:43.872549  543793 cri.go:89] found id: ""
	I1212 21:31:43.872571  543793 logs.go:282] 0 containers: []
	W1212 21:31:43.872580  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:43.872586  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:43.872641  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:43.906902  543793 cri.go:89] found id: ""
	I1212 21:31:43.906924  543793 logs.go:282] 0 containers: []
	W1212 21:31:43.906932  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:43.906938  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:43.906995  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:43.937886  543793 cri.go:89] found id: ""
	I1212 21:31:43.937908  543793 logs.go:282] 0 containers: []
	W1212 21:31:43.937915  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:43.937921  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:43.937984  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:44.002472  543793 cri.go:89] found id: ""
	I1212 21:31:44.002499  543793 logs.go:282] 0 containers: []
	W1212 21:31:44.002508  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:44.002515  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:44.002583  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:44.069264  543793 cri.go:89] found id: ""
	I1212 21:31:44.069345  543793 logs.go:282] 0 containers: []
	W1212 21:31:44.069371  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:44.069392  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:44.069501  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:44.097589  543793 cri.go:89] found id: ""
	I1212 21:31:44.097663  543793 logs.go:282] 0 containers: []
	W1212 21:31:44.097687  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:44.097709  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:44.097815  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:44.130987  543793 cri.go:89] found id: ""
	I1212 21:31:44.131063  543793 logs.go:282] 0 containers: []
	W1212 21:31:44.131086  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:44.131106  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:44.131215  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:44.157091  543793 cri.go:89] found id: ""
	I1212 21:31:44.157114  543793 logs.go:282] 0 containers: []
	W1212 21:31:44.157123  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:44.157133  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:44.157144  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:44.229960  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:44.230018  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:44.246114  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:44.246142  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:44.316292  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:44.316313  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:44.316325  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:44.351274  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:44.351312  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:46.887962  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:46.899393  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:46.899459  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:46.926696  543793 cri.go:89] found id: ""
	I1212 21:31:46.926725  543793 logs.go:282] 0 containers: []
	W1212 21:31:46.926735  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:46.926741  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:46.926800  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:46.970551  543793 cri.go:89] found id: ""
	I1212 21:31:46.970578  543793 logs.go:282] 0 containers: []
	W1212 21:31:46.970587  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:46.970593  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:46.970651  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:47.042298  543793 cri.go:89] found id: ""
	I1212 21:31:47.042324  543793 logs.go:282] 0 containers: []
	W1212 21:31:47.042333  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:47.042339  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:47.042400  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:47.110468  543793 cri.go:89] found id: ""
	I1212 21:31:47.110495  543793 logs.go:282] 0 containers: []
	W1212 21:31:47.110505  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:47.110510  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:47.110570  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:47.149747  543793 cri.go:89] found id: ""
	I1212 21:31:47.149771  543793 logs.go:282] 0 containers: []
	W1212 21:31:47.149780  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:47.149785  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:47.149852  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:47.196751  543793 cri.go:89] found id: ""
	I1212 21:31:47.196780  543793 logs.go:282] 0 containers: []
	W1212 21:31:47.196789  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:47.196795  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:47.196854  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:47.230960  543793 cri.go:89] found id: ""
	I1212 21:31:47.230983  543793 logs.go:282] 0 containers: []
	W1212 21:31:47.230992  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:47.231000  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:47.231068  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:47.262810  543793 cri.go:89] found id: ""
	I1212 21:31:47.262834  543793 logs.go:282] 0 containers: []
	W1212 21:31:47.262842  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:47.262851  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:47.262862  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:47.348233  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:47.348251  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:47.348265  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:47.385092  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:47.385123  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:47.427385  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:47.427409  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:47.511520  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:47.511599  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:50.041608  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:50.053346  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:50.053452  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:50.080405  543793 cri.go:89] found id: ""
	I1212 21:31:50.080433  543793 logs.go:282] 0 containers: []
	W1212 21:31:50.080444  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:50.080451  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:50.080512  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:50.119135  543793 cri.go:89] found id: ""
	I1212 21:31:50.119160  543793 logs.go:282] 0 containers: []
	W1212 21:31:50.119169  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:50.119175  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:50.119233  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:50.157617  543793 cri.go:89] found id: ""
	I1212 21:31:50.157643  543793 logs.go:282] 0 containers: []
	W1212 21:31:50.157655  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:50.157662  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:50.157718  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:50.187646  543793 cri.go:89] found id: ""
	I1212 21:31:50.187672  543793 logs.go:282] 0 containers: []
	W1212 21:31:50.187680  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:50.187686  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:50.187742  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:50.218411  543793 cri.go:89] found id: ""
	I1212 21:31:50.218435  543793 logs.go:282] 0 containers: []
	W1212 21:31:50.218443  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:50.218448  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:50.218496  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:50.260058  543793 cri.go:89] found id: ""
	I1212 21:31:50.260085  543793 logs.go:282] 0 containers: []
	W1212 21:31:50.260095  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:50.260100  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:50.260158  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:50.296713  543793 cri.go:89] found id: ""
	I1212 21:31:50.296739  543793 logs.go:282] 0 containers: []
	W1212 21:31:50.296748  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:50.296754  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:50.296812  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:50.329386  543793 cri.go:89] found id: ""
	I1212 21:31:50.329418  543793 logs.go:282] 0 containers: []
	W1212 21:31:50.329430  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:50.329441  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:50.329452  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:50.362431  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:50.362459  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:50.434414  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:50.434452  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:50.453125  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:50.453153  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:50.533548  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:50.533569  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:50.533582  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:53.069570  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:53.079835  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:53.079915  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:53.109310  543793 cri.go:89] found id: ""
	I1212 21:31:53.109336  543793 logs.go:282] 0 containers: []
	W1212 21:31:53.109345  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:53.109351  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:53.109413  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:53.135401  543793 cri.go:89] found id: ""
	I1212 21:31:53.135427  543793 logs.go:282] 0 containers: []
	W1212 21:31:53.135442  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:53.135449  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:53.135508  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:53.160646  543793 cri.go:89] found id: ""
	I1212 21:31:53.160670  543793 logs.go:282] 0 containers: []
	W1212 21:31:53.160678  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:53.160684  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:53.160739  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:53.190874  543793 cri.go:89] found id: ""
	I1212 21:31:53.190901  543793 logs.go:282] 0 containers: []
	W1212 21:31:53.190919  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:53.190926  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:53.190986  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:53.216322  543793 cri.go:89] found id: ""
	I1212 21:31:53.216347  543793 logs.go:282] 0 containers: []
	W1212 21:31:53.216356  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:53.216363  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:53.216444  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:53.243418  543793 cri.go:89] found id: ""
	I1212 21:31:53.243443  543793 logs.go:282] 0 containers: []
	W1212 21:31:53.243452  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:53.243457  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:53.243516  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:53.267916  543793 cri.go:89] found id: ""
	I1212 21:31:53.267941  543793 logs.go:282] 0 containers: []
	W1212 21:31:53.267951  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:53.267957  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:53.268015  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:53.293528  543793 cri.go:89] found id: ""
	I1212 21:31:53.293553  543793 logs.go:282] 0 containers: []
	W1212 21:31:53.293562  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:53.293571  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:53.293583  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:53.361977  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:53.362001  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:53.362013  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:53.393087  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:53.393122  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:53.425767  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:53.425795  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:53.492377  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:53.492410  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:56.011638  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:56.023502  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:56.023577  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:56.050058  543793 cri.go:89] found id: ""
	I1212 21:31:56.050082  543793 logs.go:282] 0 containers: []
	W1212 21:31:56.050090  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:56.050097  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:56.050155  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:56.079661  543793 cri.go:89] found id: ""
	I1212 21:31:56.079684  543793 logs.go:282] 0 containers: []
	W1212 21:31:56.079693  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:56.079698  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:56.079757  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:56.106555  543793 cri.go:89] found id: ""
	I1212 21:31:56.106581  543793 logs.go:282] 0 containers: []
	W1212 21:31:56.106589  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:56.106596  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:56.106655  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:56.132630  543793 cri.go:89] found id: ""
	I1212 21:31:56.132656  543793 logs.go:282] 0 containers: []
	W1212 21:31:56.132665  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:56.132677  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:56.132734  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:56.158473  543793 cri.go:89] found id: ""
	I1212 21:31:56.158499  543793 logs.go:282] 0 containers: []
	W1212 21:31:56.158508  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:56.158514  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:56.158572  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:56.184302  543793 cri.go:89] found id: ""
	I1212 21:31:56.184328  543793 logs.go:282] 0 containers: []
	W1212 21:31:56.184337  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:56.184343  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:56.184431  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:56.215573  543793 cri.go:89] found id: ""
	I1212 21:31:56.215598  543793 logs.go:282] 0 containers: []
	W1212 21:31:56.215608  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:56.215614  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:56.215676  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:56.244335  543793 cri.go:89] found id: ""
	I1212 21:31:56.244385  543793 logs.go:282] 0 containers: []
	W1212 21:31:56.244394  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:56.244403  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:56.244414  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:56.310290  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:56.310327  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:56.327129  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:56.327156  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:56.391547  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:56.391569  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:56.391582  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:56.421837  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:56.421875  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:31:58.956515  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:31:58.976280  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:31:58.976355  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:31:59.010255  543793 cri.go:89] found id: ""
	I1212 21:31:59.010287  543793 logs.go:282] 0 containers: []
	W1212 21:31:59.010295  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:31:59.010301  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:31:59.010362  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:31:59.052897  543793 cri.go:89] found id: ""
	I1212 21:31:59.052926  543793 logs.go:282] 0 containers: []
	W1212 21:31:59.052934  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:31:59.052940  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:31:59.052996  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:31:59.102667  543793 cri.go:89] found id: ""
	I1212 21:31:59.102695  543793 logs.go:282] 0 containers: []
	W1212 21:31:59.102703  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:31:59.102709  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:31:59.102763  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:31:59.130639  543793 cri.go:89] found id: ""
	I1212 21:31:59.130668  543793 logs.go:282] 0 containers: []
	W1212 21:31:59.130678  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:31:59.130686  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:31:59.130741  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:31:59.166163  543793 cri.go:89] found id: ""
	I1212 21:31:59.166193  543793 logs.go:282] 0 containers: []
	W1212 21:31:59.166202  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:31:59.166208  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:31:59.166318  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:31:59.193030  543793 cri.go:89] found id: ""
	I1212 21:31:59.193059  543793 logs.go:282] 0 containers: []
	W1212 21:31:59.193068  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:31:59.193073  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:31:59.193129  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:31:59.226663  543793 cri.go:89] found id: ""
	I1212 21:31:59.226692  543793 logs.go:282] 0 containers: []
	W1212 21:31:59.226700  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:31:59.226706  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:31:59.226772  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:31:59.255359  543793 cri.go:89] found id: ""
	I1212 21:31:59.255392  543793 logs.go:282] 0 containers: []
	W1212 21:31:59.255401  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:31:59.255411  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:31:59.255423  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:31:59.345147  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:31:59.345187  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:31:59.361792  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:31:59.361833  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:31:59.442346  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:31:59.442372  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:31:59.442387  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:31:59.477381  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:31:59.477418  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:32:02.012674  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:02.024531  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:32:02.024605  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:32:02.051909  543793 cri.go:89] found id: ""
	I1212 21:32:02.051933  543793 logs.go:282] 0 containers: []
	W1212 21:32:02.051941  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:32:02.051947  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:32:02.052007  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:32:02.078441  543793 cri.go:89] found id: ""
	I1212 21:32:02.078467  543793 logs.go:282] 0 containers: []
	W1212 21:32:02.078476  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:32:02.078483  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:32:02.078542  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:32:02.105154  543793 cri.go:89] found id: ""
	I1212 21:32:02.105180  543793 logs.go:282] 0 containers: []
	W1212 21:32:02.105189  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:32:02.105195  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:32:02.105256  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:32:02.141346  543793 cri.go:89] found id: ""
	I1212 21:32:02.141375  543793 logs.go:282] 0 containers: []
	W1212 21:32:02.141384  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:32:02.141391  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:32:02.141450  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:32:02.167929  543793 cri.go:89] found id: ""
	I1212 21:32:02.167955  543793 logs.go:282] 0 containers: []
	W1212 21:32:02.167964  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:32:02.167971  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:32:02.168030  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:32:02.193725  543793 cri.go:89] found id: ""
	I1212 21:32:02.193749  543793 logs.go:282] 0 containers: []
	W1212 21:32:02.193758  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:32:02.193764  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:32:02.193821  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:32:02.227109  543793 cri.go:89] found id: ""
	I1212 21:32:02.227187  543793 logs.go:282] 0 containers: []
	W1212 21:32:02.227212  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:32:02.227232  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:32:02.227324  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:32:02.255125  543793 cri.go:89] found id: ""
	I1212 21:32:02.255151  543793 logs.go:282] 0 containers: []
	W1212 21:32:02.255160  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:32:02.255168  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:32:02.255183  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:32:02.295877  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:32:02.295929  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:32:02.366019  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:32:02.366057  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:32:02.383637  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:32:02.383670  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:32:02.449436  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:32:02.449509  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:32:02.449539  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:32:04.982191  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:04.992811  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:32:04.992888  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:32:05.029997  543793 cri.go:89] found id: ""
	I1212 21:32:05.030025  543793 logs.go:282] 0 containers: []
	W1212 21:32:05.030034  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:32:05.030040  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:32:05.030099  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:32:05.057869  543793 cri.go:89] found id: ""
	I1212 21:32:05.057898  543793 logs.go:282] 0 containers: []
	W1212 21:32:05.057907  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:32:05.057913  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:32:05.057968  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:32:05.084455  543793 cri.go:89] found id: ""
	I1212 21:32:05.084484  543793 logs.go:282] 0 containers: []
	W1212 21:32:05.084494  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:32:05.084501  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:32:05.084562  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:32:05.110872  543793 cri.go:89] found id: ""
	I1212 21:32:05.110901  543793 logs.go:282] 0 containers: []
	W1212 21:32:05.110910  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:32:05.110917  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:32:05.110997  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:32:05.137414  543793 cri.go:89] found id: ""
	I1212 21:32:05.137440  543793 logs.go:282] 0 containers: []
	W1212 21:32:05.137449  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:32:05.137455  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:32:05.137514  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:32:05.165119  543793 cri.go:89] found id: ""
	I1212 21:32:05.165147  543793 logs.go:282] 0 containers: []
	W1212 21:32:05.165156  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:32:05.165162  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:32:05.165239  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:32:05.191553  543793 cri.go:89] found id: ""
	I1212 21:32:05.191585  543793 logs.go:282] 0 containers: []
	W1212 21:32:05.191594  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:32:05.191615  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:32:05.191694  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:32:05.220066  543793 cri.go:89] found id: ""
	I1212 21:32:05.220106  543793 logs.go:282] 0 containers: []
	W1212 21:32:05.220115  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:32:05.220124  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:32:05.220163  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:32:05.251342  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:32:05.251377  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:32:05.318877  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:32:05.318914  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:32:05.335021  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:32:05.335054  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:32:05.397138  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:32:05.397160  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:32:05.397175  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:32:07.928825  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:07.939422  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:32:07.939498  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:32:07.981875  543793 cri.go:89] found id: ""
	I1212 21:32:07.981900  543793 logs.go:282] 0 containers: []
	W1212 21:32:07.981909  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:32:07.981915  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:32:07.981974  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:32:08.023190  543793 cri.go:89] found id: ""
	I1212 21:32:08.023213  543793 logs.go:282] 0 containers: []
	W1212 21:32:08.023222  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:32:08.023228  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:32:08.023288  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:32:08.053796  543793 cri.go:89] found id: ""
	I1212 21:32:08.053820  543793 logs.go:282] 0 containers: []
	W1212 21:32:08.053830  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:32:08.053835  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:32:08.053897  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:32:08.084206  543793 cri.go:89] found id: ""
	I1212 21:32:08.084237  543793 logs.go:282] 0 containers: []
	W1212 21:32:08.084246  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:32:08.084253  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:32:08.084316  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:32:08.111780  543793 cri.go:89] found id: ""
	I1212 21:32:08.111810  543793 logs.go:282] 0 containers: []
	W1212 21:32:08.111819  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:32:08.111825  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:32:08.111884  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:32:08.143130  543793 cri.go:89] found id: ""
	I1212 21:32:08.143153  543793 logs.go:282] 0 containers: []
	W1212 21:32:08.143162  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:32:08.143169  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:32:08.143229  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:32:08.169133  543793 cri.go:89] found id: ""
	I1212 21:32:08.169159  543793 logs.go:282] 0 containers: []
	W1212 21:32:08.169167  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:32:08.169173  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:32:08.169229  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:32:08.195984  543793 cri.go:89] found id: ""
	I1212 21:32:08.196010  543793 logs.go:282] 0 containers: []
	W1212 21:32:08.196019  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:32:08.196029  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:32:08.196041  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:32:08.263676  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:32:08.263713  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:32:08.280616  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:32:08.280649  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:32:08.350189  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:32:08.350211  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:32:08.350225  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:32:08.381052  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:32:08.381087  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1212 21:32:10.911342  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:10.922181  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:32:10.922252  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:32:10.953247  543793 cri.go:89] found id: ""
	I1212 21:32:10.953284  543793 logs.go:282] 0 containers: []
	W1212 21:32:10.953293  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:32:10.953300  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:32:10.953382  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:32:10.997176  543793 cri.go:89] found id: ""
	I1212 21:32:10.997216  543793 logs.go:282] 0 containers: []
	W1212 21:32:10.997225  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:32:10.997231  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:32:10.997332  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:32:11.027957  543793 cri.go:89] found id: ""
	I1212 21:32:11.027988  543793 logs.go:282] 0 containers: []
	W1212 21:32:11.027997  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:32:11.028004  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:32:11.028093  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:32:11.054915  543793 cri.go:89] found id: ""
	I1212 21:32:11.054958  543793 logs.go:282] 0 containers: []
	W1212 21:32:11.054968  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:32:11.054974  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:32:11.055069  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:32:11.080973  543793 cri.go:89] found id: ""
	I1212 21:32:11.081048  543793 logs.go:282] 0 containers: []
	W1212 21:32:11.081063  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:32:11.081070  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:32:11.081135  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:32:11.107612  543793 cri.go:89] found id: ""
	I1212 21:32:11.107638  543793 logs.go:282] 0 containers: []
	W1212 21:32:11.107648  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:32:11.107654  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:32:11.107715  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:32:11.135011  543793 cri.go:89] found id: ""
	I1212 21:32:11.135046  543793 logs.go:282] 0 containers: []
	W1212 21:32:11.135055  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:32:11.135062  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:32:11.135130  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:32:11.168631  543793 cri.go:89] found id: ""
	I1212 21:32:11.168657  543793 logs.go:282] 0 containers: []
	W1212 21:32:11.168665  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:32:11.168675  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:32:11.168686  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:32:11.235710  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:32:11.235752  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:32:11.253280  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:32:11.253312  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:32:11.321049  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:32:11.321114  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:32:11.321145  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:32:11.352043  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:32:11.352080  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
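The cycle above repeats every few seconds while minikube waits for the control plane to come back: it probes for a kube-apiserver process, lists each expected container with crictl, and, finding none, re-gathers kubelet, dmesg, describe-nodes, and CRI-O logs. A minimal sketch of the same per-component check, run manually on the node over SSH (the crictl invocation is the one shown in the log; the loop itself is only an illustration, not minikube's code):

# Mirror the container checks from the log above (assumes crictl and the CRI-O socket are present on the node).
for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet storage-provisioner; do
  ids=$(sudo crictl ps -a --quiet --name="$name")
  if [ -z "$ids" ]; then
    echo "no container found matching $name"
  else
    echo "$name: $ids"
  fi
done

An empty result for every name, as seen repeatedly above, is what eventually drives minikube to give up on restarting the control plane and fall back to a full kubeadm reset and re-init, as the next lines show.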
	I1212 21:32:13.880062  543793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:32:13.889930  543793 kubeadm.go:602] duration metric: took 4m4.292019214s to restartPrimaryControlPlane
	W1212 21:32:13.890000  543793 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1212 21:32:13.890060  543793 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:32:14.304046  543793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:32:14.317315  543793 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1212 21:32:14.325491  543793 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:32:14.325555  543793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:32:14.333414  543793 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:32:14.333435  543793 kubeadm.go:158] found existing configuration files:
	
	I1212 21:32:14.333513  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:32:14.341318  543793 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:32:14.341389  543793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:32:14.349355  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:32:14.357387  543793 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:32:14.357454  543793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:32:14.365602  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:32:14.373495  543793 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:32:14.373593  543793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:32:14.381367  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:32:14.391443  543793 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:32:14.391561  543793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:32:14.399125  543793 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:32:14.507628  543793 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 21:32:14.508096  543793 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:32:14.574328  543793 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:36:16.143137  543793 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1212 21:36:16.143170  543793 kubeadm.go:319] 
	I1212 21:36:16.143241  543793 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:36:16.147335  543793 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:36:16.147397  543793 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:36:16.147484  543793 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:36:16.147538  543793 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 21:36:16.147572  543793 kubeadm.go:319] OS: Linux
	I1212 21:36:16.147614  543793 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:36:16.147659  543793 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:36:16.147704  543793 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:36:16.147749  543793 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:36:16.147795  543793 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:36:16.147841  543793 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:36:16.147884  543793 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:36:16.147929  543793 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:36:16.147973  543793 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:36:16.148041  543793 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:36:16.148137  543793 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:36:16.148223  543793 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:36:16.148282  543793 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:36:16.151514  543793 out.go:252]   - Generating certificates and keys ...
	I1212 21:36:16.151620  543793 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:36:16.151693  543793 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:36:16.151775  543793 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:36:16.151840  543793 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:36:16.151912  543793 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:36:16.151970  543793 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:36:16.152038  543793 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:36:16.152108  543793 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:36:16.152188  543793 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:36:16.152267  543793 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:36:16.152313  543793 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:36:16.152398  543793 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:36:16.152455  543793 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:36:16.152517  543793 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:36:16.152575  543793 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:36:16.152642  543793 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:36:16.152701  543793 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:36:16.152821  543793 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:36:16.152891  543793 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:36:16.156013  543793 out.go:252]   - Booting up control plane ...
	I1212 21:36:16.156158  543793 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:36:16.156304  543793 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:36:16.156431  543793 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:36:16.156540  543793 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:36:16.156633  543793 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:36:16.156735  543793 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:36:16.156819  543793 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:36:16.156860  543793 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:36:16.156988  543793 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:36:16.157106  543793 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:36:16.157173  543793 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001157988s
	I1212 21:36:16.157181  543793 kubeadm.go:319] 
	I1212 21:36:16.157235  543793 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:36:16.157269  543793 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:36:16.157370  543793 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:36:16.157377  543793 kubeadm.go:319] 
	I1212 21:36:16.157475  543793 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:36:16.157510  543793 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:36:16.157543  543793 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	W1212 21:36:16.157658  543793 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001157988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001157988s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1212 21:36:16.157745  543793 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1212 21:36:16.158019  543793 kubeadm.go:319] 
	I1212 21:36:16.568793  543793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:36:16.581725  543793 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1212 21:36:16.581845  543793 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1212 21:36:16.589944  543793 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1212 21:36:16.589975  543793 kubeadm.go:158] found existing configuration files:
	
	I1212 21:36:16.590099  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1212 21:36:16.598207  543793 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1212 21:36:16.598293  543793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1212 21:36:16.606023  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1212 21:36:16.613715  543793 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1212 21:36:16.613791  543793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1212 21:36:16.621478  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1212 21:36:16.629419  543793 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1212 21:36:16.629485  543793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1212 21:36:16.637527  543793 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1212 21:36:16.645897  543793 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1212 21:36:16.645988  543793 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1212 21:36:16.653557  543793 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1212 21:36:16.693943  543793 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1212 21:36:16.694343  543793 kubeadm.go:319] [preflight] Running pre-flight checks
	I1212 21:36:16.774737  543793 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1212 21:36:16.774839  543793 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1212 21:36:16.774907  543793 kubeadm.go:319] OS: Linux
	I1212 21:36:16.774986  543793 kubeadm.go:319] CGROUPS_CPU: enabled
	I1212 21:36:16.775055  543793 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1212 21:36:16.775111  543793 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1212 21:36:16.775165  543793 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1212 21:36:16.775218  543793 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1212 21:36:16.775277  543793 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1212 21:36:16.775328  543793 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1212 21:36:16.775401  543793 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1212 21:36:16.775470  543793 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1212 21:36:16.842041  543793 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1212 21:36:16.842158  543793 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1212 21:36:16.842261  543793 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1212 21:36:16.856928  543793 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1212 21:36:16.860543  543793 out.go:252]   - Generating certificates and keys ...
	I1212 21:36:16.860659  543793 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1212 21:36:16.860761  543793 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1212 21:36:16.860853  543793 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1212 21:36:16.860942  543793 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1212 21:36:16.861056  543793 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1212 21:36:16.861132  543793 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1212 21:36:16.861228  543793 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1212 21:36:16.861311  543793 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1212 21:36:16.861408  543793 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1212 21:36:16.861500  543793 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1212 21:36:16.861574  543793 kubeadm.go:319] [certs] Using the existing "sa" key
	I1212 21:36:16.861657  543793 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1212 21:36:17.004822  543793 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1212 21:36:17.148549  543793 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1212 21:36:17.567730  543793 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1212 21:36:17.802089  543793 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1212 21:36:17.992972  543793 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1212 21:36:17.994060  543793 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1212 21:36:17.997016  543793 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1212 21:36:18.000220  543793 out.go:252]   - Booting up control plane ...
	I1212 21:36:18.000340  543793 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1212 21:36:18.000442  543793 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1212 21:36:18.001882  543793 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1212 21:36:18.029128  543793 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1212 21:36:18.029232  543793 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1212 21:36:18.042915  543793 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1212 21:36:18.044911  543793 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1212 21:36:18.045845  543793 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1212 21:36:18.168675  543793 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1212 21:36:18.168790  543793 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1212 21:40:18.168722  543793 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000129118s
	I1212 21:40:18.168757  543793 kubeadm.go:319] 
	I1212 21:40:18.168815  543793 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:40:18.168849  543793 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:40:18.168974  543793 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:40:18.168999  543793 kubeadm.go:319] 
	I1212 21:40:18.169100  543793 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:40:18.169138  543793 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:40:18.169168  543793 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:40:18.169174  543793 kubeadm.go:319] 
	I1212 21:40:18.173545  543793 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 21:40:18.173970  543793 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:40:18.174086  543793 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:40:18.174354  543793 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:40:18.174364  543793 kubeadm.go:319] 
	I1212 21:40:18.174433  543793 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:40:18.174491  543793 kubeadm.go:403] duration metric: took 12m8.649146692s to StartCluster
	I1212 21:40:18.174528  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:40:18.174586  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:40:18.205395  543793 cri.go:89] found id: ""
	I1212 21:40:18.205411  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.205419  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:40:18.205438  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:40:18.205484  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:40:18.266438  543793 cri.go:89] found id: ""
	I1212 21:40:18.266460  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.266469  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:40:18.266474  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:40:18.266531  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:40:18.306994  543793 cri.go:89] found id: ""
	I1212 21:40:18.307017  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.307027  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:40:18.307033  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:40:18.307093  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:40:18.340681  543793 cri.go:89] found id: ""
	I1212 21:40:18.340704  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.340713  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:40:18.340719  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:40:18.340775  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:40:18.374180  543793 cri.go:89] found id: ""
	I1212 21:40:18.374203  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.374211  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:40:18.374217  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:40:18.374273  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:40:18.411000  543793 cri.go:89] found id: ""
	I1212 21:40:18.411023  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.411032  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:40:18.411044  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:40:18.411098  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:40:18.438624  543793 cri.go:89] found id: ""
	I1212 21:40:18.438646  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.438655  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:40:18.438660  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:40:18.438715  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:40:18.474576  543793 cri.go:89] found id: ""
	I1212 21:40:18.474604  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.474618  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:40:18.474629  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:40:18.474647  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:40:18.558047  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:40:18.558084  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:40:18.576827  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:40:18.576928  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:40:18.657911  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:40:18.657936  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:40:18.657952  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:40:18.699088  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:40:18.699127  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:40:18.783938  543793 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000129118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 21:40:18.783994  543793 out.go:285] * 
	* 
	W1212 21:40:18.784053  543793 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000129118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000129118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:40:18.784072  543793 out.go:285] * 
	* 
	W1212 21:40:18.786302  543793 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:40:18.791587  543793 out.go:203] 
	W1212 21:40:18.793663  543793 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000129118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000129118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:40:18.793710  543793 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:40:18.793732  543793 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:40:18.796803  543793 out.go:203] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-linux-arm64 start -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio : exit status 109
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-905307 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-905307 version --output=json: exit status 1 (161.456117ms)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "33",
	    "gitVersion": "v1.33.2",
	    "gitCommit": "a57b6f7709f6c2722b92f07b8b4c48210a51fc40",
	    "gitTreeState": "clean",
	    "buildDate": "2025-06-17T18:41:31Z",
	    "goVersion": "go1.24.4",
	    "compiler": "gc",
	    "platform": "linux/arm64"
	  },
	  "kustomizeVersion": "v5.6.0"
	}

-- /stdout --
** stderr ** 
	The connection to the server 192.168.76.2:8443 was refused - did you specify the right host or port?

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-12 21:40:19.743890383 +0000 UTC m=+5447.305549421
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-905307
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-905307:

-- stdout --
	[
	    {
	        "Id": "36bbd1563d1160f49fd1a80fa6118509f887e599dfd414f92c6ded256a9a2434",
	        "Created": "2025-12-12T21:27:17.552960705Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 543982,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:27:56.793391704Z",
	            "FinishedAt": "2025-12-12T21:27:55.80353952Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/36bbd1563d1160f49fd1a80fa6118509f887e599dfd414f92c6ded256a9a2434/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/36bbd1563d1160f49fd1a80fa6118509f887e599dfd414f92c6ded256a9a2434/hostname",
	        "HostsPath": "/var/lib/docker/containers/36bbd1563d1160f49fd1a80fa6118509f887e599dfd414f92c6ded256a9a2434/hosts",
	        "LogPath": "/var/lib/docker/containers/36bbd1563d1160f49fd1a80fa6118509f887e599dfd414f92c6ded256a9a2434/36bbd1563d1160f49fd1a80fa6118509f887e599dfd414f92c6ded256a9a2434-json.log",
	        "Name": "/kubernetes-upgrade-905307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-905307:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-905307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "36bbd1563d1160f49fd1a80fa6118509f887e599dfd414f92c6ded256a9a2434",
	                "LowerDir": "/var/lib/docker/overlay2/e2296d75386a9a6a616f2347e430571342a6276fec4fbe66726047eab233d730-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e2296d75386a9a6a616f2347e430571342a6276fec4fbe66726047eab233d730/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e2296d75386a9a6a616f2347e430571342a6276fec4fbe66726047eab233d730/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e2296d75386a9a6a616f2347e430571342a6276fec4fbe66726047eab233d730/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-905307",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-905307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-905307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-905307",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-905307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c4d4c77a7ddf18ecad40b1dbb45ebd56d60b75877b1869d9bb87e49ffd2ed0f",
	            "SandboxKey": "/var/run/docker/netns/6c4d4c77a7dd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33385"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33386"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33389"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33387"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33388"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-905307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:07:29:83:a7:b6",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6b32de8ea76025bb40b74bd2f4459d20ceec42f7905b22f259568a753d4cc465",
	                    "EndpointID": "e6ff60e0b52f95bb406191db9f6337523e6a9585f9c94a51344e82f8cccbffda",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-905307",
	                        "36bbd1563d11"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-905307 -n kubernetes-upgrade-905307
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p kubernetes-upgrade-905307 -n kubernetes-upgrade-905307: exit status 2 (412.975382ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-905307 logs -n 25
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-406866 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:26 UTC │
	│ delete  │ -p NoKubernetes-406866                                                                                                                          │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:26 UTC │
	│ start   │ -p NoKubernetes-406866 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:26 UTC │
	│ ssh     │ -p NoKubernetes-406866 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │                     │
	│ stop    │ -p NoKubernetes-406866                                                                                                                          │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p NoKubernetes-406866 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p missing-upgrade-992322 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-992322    │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ ssh     │ -p NoKubernetes-406866 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │                     │
	│ delete  │ -p NoKubernetes-406866                                                                                                                          │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-905307 │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ delete  │ -p missing-upgrade-992322                                                                                                                       │ missing-upgrade-992322    │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ stop    │ -p kubernetes-upgrade-905307                                                                                                                    │ kubernetes-upgrade-905307 │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p stopped-upgrade-302169 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-302169    │ jenkins │ v1.35.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:28 UTC │
	│ start   │ -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-905307 │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │                     │
	│ stop    │ stopped-upgrade-302169 stop                                                                                                                     │ stopped-upgrade-302169    │ jenkins │ v1.35.0 │ 12 Dec 25 21:28 UTC │ 12 Dec 25 21:28 UTC │
	│ start   │ -p stopped-upgrade-302169 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-302169    │ jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │ 12 Dec 25 21:32 UTC │
	│ delete  │ -p stopped-upgrade-302169                                                                                                                       │ stopped-upgrade-302169    │ jenkins │ v1.37.0 │ 12 Dec 25 21:33 UTC │ 12 Dec 25 21:33 UTC │
	│ start   │ -p running-upgrade-649209 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-649209    │ jenkins │ v1.35.0 │ 12 Dec 25 21:33 UTC │ 12 Dec 25 21:33 UTC │
	│ start   │ -p running-upgrade-649209 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-649209    │ jenkins │ v1.37.0 │ 12 Dec 25 21:33 UTC │ 12 Dec 25 21:38 UTC │
	│ delete  │ -p running-upgrade-649209                                                                                                                       │ running-upgrade-649209    │ jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ start   │ -p pause-634913 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-634913              │ jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:39 UTC │
	│ start   │ -p pause-634913 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-634913              │ jenkins │ v1.37.0 │ 12 Dec 25 21:39 UTC │ 12 Dec 25 21:39 UTC │
	│ pause   │ -p pause-634913 --alsologtostderr -v=5                                                                                                          │ pause-634913              │ jenkins │ v1.37.0 │ 12 Dec 25 21:39 UTC │                     │
	│ delete  │ -p pause-634913                                                                                                                                 │ pause-634913              │ jenkins │ v1.37.0 │ 12 Dec 25 21:40 UTC │ 12 Dec 25 21:40 UTC │
	│ start   │ -p force-systemd-flag-700267 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                     │ force-systemd-flag-700267 │ jenkins │ v1.37.0 │ 12 Dec 25 21:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:40:07
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:40:07.804558  579944 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:40:07.804745  579944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:40:07.804759  579944 out.go:374] Setting ErrFile to fd 2...
	I1212 21:40:07.804766  579944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:40:07.805039  579944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:40:07.805471  579944 out.go:368] Setting JSON to false
	I1212 21:40:07.806386  579944 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15760,"bootTime":1765559848,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 21:40:07.806456  579944 start.go:143] virtualization:  
	I1212 21:40:07.809927  579944 out.go:179] * [force-systemd-flag-700267] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 21:40:07.814356  579944 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:40:07.814498  579944 notify.go:221] Checking for updates...
	I1212 21:40:07.820673  579944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:40:07.823981  579944 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:40:07.827104  579944 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 21:40:07.830211  579944 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 21:40:07.833184  579944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:40:07.836668  579944 config.go:182] Loaded profile config "kubernetes-upgrade-905307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 21:40:07.836838  579944 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:40:07.858803  579944 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 21:40:07.858937  579944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:40:07.923992  579944 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 21:40:07.914644785 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:40:07.924111  579944 docker.go:319] overlay module found
	I1212 21:40:07.927299  579944 out.go:179] * Using the docker driver based on user configuration
	I1212 21:40:07.930212  579944 start.go:309] selected driver: docker
	I1212 21:40:07.930230  579944 start.go:927] validating driver "docker" against <nil>
	I1212 21:40:07.930245  579944 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:40:07.930994  579944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:40:07.997567  579944 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 21:40:07.98744934 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:40:07.997746  579944 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 21:40:07.997981  579944 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 21:40:08.005245  579944 out.go:179] * Using Docker driver with root privileges
	I1212 21:40:08.009006  579944 cni.go:84] Creating CNI manager for ""
	I1212 21:40:08.009095  579944 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 21:40:08.009116  579944 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 21:40:08.009223  579944 start.go:353] cluster config:
	{Name:force-systemd-flag-700267 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-700267 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:40:08.012812  579944 out.go:179] * Starting "force-systemd-flag-700267" primary control-plane node in "force-systemd-flag-700267" cluster
	I1212 21:40:08.015828  579944 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:40:08.018953  579944 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:40:08.021990  579944 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:40:08.022054  579944 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 21:40:08.022062  579944 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:40:08.022067  579944 cache.go:65] Caching tarball of preloaded images
	I1212 21:40:08.022196  579944 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:40:08.022207  579944 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:40:08.022331  579944 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/force-systemd-flag-700267/config.json ...
	I1212 21:40:08.022384  579944 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/force-systemd-flag-700267/config.json: {Name:mk7f7e1f54a5ef23b3947079d47027b659d2b1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:40:08.044746  579944 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:40:08.044773  579944 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:40:08.044788  579944 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:40:08.044823  579944 start.go:360] acquireMachinesLock for force-systemd-flag-700267: {Name:mk1012a32e098aebaa7feffd9209bb9273fbaaaf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:40:08.044940  579944 start.go:364] duration metric: took 89.839µs to acquireMachinesLock for "force-systemd-flag-700267"
	I1212 21:40:08.044974  579944 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-700267 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-700267 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:40:08.045046  579944 start.go:125] createHost starting for "" (driver="docker")
	I1212 21:40:08.048570  579944 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1212 21:40:08.048861  579944 start.go:159] libmachine.API.Create for "force-systemd-flag-700267" (driver="docker")
	I1212 21:40:08.048915  579944 client.go:173] LocalClient.Create starting
	I1212 21:40:08.049007  579944 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem
	I1212 21:40:08.049075  579944 main.go:143] libmachine: Decoding PEM data...
	I1212 21:40:08.049098  579944 main.go:143] libmachine: Parsing certificate...
	I1212 21:40:08.049142  579944 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem
	I1212 21:40:08.049169  579944 main.go:143] libmachine: Decoding PEM data...
	I1212 21:40:08.049182  579944 main.go:143] libmachine: Parsing certificate...
	I1212 21:40:08.049644  579944 cli_runner.go:164] Run: docker network inspect force-systemd-flag-700267 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1212 21:40:08.068455  579944 cli_runner.go:211] docker network inspect force-systemd-flag-700267 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1212 21:40:08.068596  579944 network_create.go:284] running [docker network inspect force-systemd-flag-700267] to gather additional debugging logs...
	I1212 21:40:08.068622  579944 cli_runner.go:164] Run: docker network inspect force-systemd-flag-700267
	W1212 21:40:08.087651  579944 cli_runner.go:211] docker network inspect force-systemd-flag-700267 returned with exit code 1
	I1212 21:40:08.087688  579944 network_create.go:287] error running [docker network inspect force-systemd-flag-700267]: docker network inspect force-systemd-flag-700267: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-700267 not found
	I1212 21:40:08.087703  579944 network_create.go:289] output of [docker network inspect force-systemd-flag-700267]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-700267 not found
	
	** /stderr **
	I1212 21:40:08.087818  579944 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:40:08.105056  579944 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ff7ed303f4da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:12:49:ad:2d:4b} reservation:<nil>}
	I1212 21:40:08.105510  579944 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-2146c0dc7fc2 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:12:1c:7d:73:92:a8} reservation:<nil>}
	I1212 21:40:08.105838  579944 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ddb81b19f833 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:52:db:b0:33:e7:14} reservation:<nil>}
	I1212 21:40:08.106190  579944 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6b32de8ea760 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:be:58:82:fe:b6:b1} reservation:<nil>}
	I1212 21:40:08.106733  579944 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a5aa80}
	I1212 21:40:08.106759  579944 network_create.go:124] attempt to create docker network force-systemd-flag-700267 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1212 21:40:08.106835  579944 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-700267 force-systemd-flag-700267
	I1212 21:40:08.172650  579944 network_create.go:108] docker network force-systemd-flag-700267 192.168.85.0/24 created
	I1212 21:40:08.172688  579944 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-700267" container
	I1212 21:40:08.172778  579944 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1212 21:40:08.189294  579944 cli_runner.go:164] Run: docker volume create force-systemd-flag-700267 --label name.minikube.sigs.k8s.io=force-systemd-flag-700267 --label created_by.minikube.sigs.k8s.io=true
	I1212 21:40:08.214079  579944 oci.go:103] Successfully created a docker volume force-systemd-flag-700267
	I1212 21:40:08.214169  579944 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-700267-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-700267 --entrypoint /usr/bin/test -v force-systemd-flag-700267:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -d /var/lib
	I1212 21:40:08.758247  579944 oci.go:107] Successfully prepared a docker volume force-systemd-flag-700267
	I1212 21:40:08.758336  579944 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:40:08.758352  579944 kic.go:194] Starting extracting preloaded images to volume ...
	I1212 21:40:08.758428  579944 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-700267:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir
	I1212 21:40:12.765722  579944 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-700267:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 -I lz4 -xf /preloaded.tar -C /extractDir: (4.007248672s)
	I1212 21:40:12.765759  579944 kic.go:203] duration metric: took 4.007403521s to extract preloaded images to volume ...
	W1212 21:40:12.765910  579944 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1212 21:40:12.766023  579944 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1212 21:40:12.822706  579944 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-700267 --name force-systemd-flag-700267 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-700267 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-700267 --network force-systemd-flag-700267 --ip 192.168.85.2 --volume force-systemd-flag-700267:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138
	I1212 21:40:13.140613  579944 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700267 --format={{.State.Running}}
	I1212 21:40:13.164880  579944 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700267 --format={{.State.Status}}
	I1212 21:40:13.190205  579944 cli_runner.go:164] Run: docker exec force-systemd-flag-700267 stat /var/lib/dpkg/alternatives/iptables
	I1212 21:40:13.242207  579944 oci.go:144] the created container "force-systemd-flag-700267" has a running status.
	I1212 21:40:13.242239  579944 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/force-systemd-flag-700267/id_rsa...
	I1212 21:40:13.407234  579944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/force-systemd-flag-700267/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1212 21:40:13.407294  579944 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22112-362983/.minikube/machines/force-systemd-flag-700267/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1212 21:40:13.436648  579944 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700267 --format={{.State.Status}}
	I1212 21:40:13.460298  579944 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1212 21:40:13.460320  579944 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-700267 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1212 21:40:13.546711  579944 cli_runner.go:164] Run: docker container inspect force-systemd-flag-700267 --format={{.State.Status}}
	I1212 21:40:13.580833  579944 machine.go:94] provisionDockerMachine start ...
	I1212 21:40:13.580925  579944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700267
	I1212 21:40:13.604438  579944 main.go:143] libmachine: Using SSH client type: native
	I1212 21:40:13.604818  579944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1212 21:40:13.604829  579944 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:40:13.605595  579944 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1212 21:40:16.764219  579944 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-700267
	
	I1212 21:40:16.764255  579944 ubuntu.go:182] provisioning hostname "force-systemd-flag-700267"
	I1212 21:40:16.764331  579944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700267
	I1212 21:40:16.782179  579944 main.go:143] libmachine: Using SSH client type: native
	I1212 21:40:16.782504  579944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1212 21:40:16.782527  579944 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-700267 && echo "force-systemd-flag-700267" | sudo tee /etc/hostname
	I1212 21:40:16.945965  579944 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-700267
	
	I1212 21:40:16.946110  579944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700267
	I1212 21:40:16.968658  579944 main.go:143] libmachine: Using SSH client type: native
	I1212 21:40:16.968981  579944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1212 21:40:16.969005  579944 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-700267' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-700267/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-700267' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:40:17.120692  579944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:40:17.120758  579944 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:40:17.120791  579944 ubuntu.go:190] setting up certificates
	I1212 21:40:17.120831  579944 provision.go:84] configureAuth start
	I1212 21:40:17.120958  579944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-700267
	I1212 21:40:17.137993  579944 provision.go:143] copyHostCerts
	I1212 21:40:17.138052  579944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:40:17.138089  579944 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:40:17.138104  579944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:40:17.138187  579944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:40:17.138273  579944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:40:17.138290  579944 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:40:17.138294  579944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:40:17.138320  579944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:40:17.138367  579944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:40:17.138383  579944 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:40:17.138387  579944 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:40:17.138412  579944 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:40:17.138457  579944 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-700267 san=[127.0.0.1 192.168.85.2 force-systemd-flag-700267 localhost minikube]
	I1212 21:40:17.338611  579944 provision.go:177] copyRemoteCerts
	I1212 21:40:17.338694  579944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:40:17.338754  579944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700267
	I1212 21:40:17.357738  579944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/force-systemd-flag-700267/id_rsa Username:docker}
	I1212 21:40:17.466476  579944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1212 21:40:17.466555  579944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1212 21:40:17.484878  579944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1212 21:40:17.484937  579944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1212 21:40:17.503077  579944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1212 21:40:17.503163  579944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:40:17.522082  579944 provision.go:87] duration metric: took 401.217465ms to configureAuth
	I1212 21:40:17.522110  579944 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:40:17.522302  579944 config.go:182] Loaded profile config "force-systemd-flag-700267": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:40:17.522416  579944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700267
	I1212 21:40:17.541122  579944 main.go:143] libmachine: Using SSH client type: native
	I1212 21:40:17.541447  579944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33410 <nil> <nil>}
	I1212 21:40:17.541467  579944 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:40:18.168722  543793 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000129118s
	I1212 21:40:18.168757  543793 kubeadm.go:319] 
	I1212 21:40:18.168815  543793 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1212 21:40:18.168849  543793 kubeadm.go:319] 	- The kubelet is not running
	I1212 21:40:18.168974  543793 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1212 21:40:18.168999  543793 kubeadm.go:319] 
	I1212 21:40:18.169100  543793 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1212 21:40:18.169138  543793 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1212 21:40:18.169168  543793 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1212 21:40:18.169174  543793 kubeadm.go:319] 
	I1212 21:40:18.173545  543793 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1212 21:40:18.173970  543793 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1212 21:40:18.174086  543793 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1212 21:40:18.174354  543793 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1212 21:40:18.174364  543793 kubeadm.go:319] 
	I1212 21:40:18.174433  543793 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1212 21:40:18.174491  543793 kubeadm.go:403] duration metric: took 12m8.649146692s to StartCluster
	I1212 21:40:18.174528  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1212 21:40:18.174586  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1212 21:40:18.205395  543793 cri.go:89] found id: ""
	I1212 21:40:18.205411  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.205419  543793 logs.go:284] No container was found matching "kube-apiserver"
	I1212 21:40:18.205438  543793 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1212 21:40:18.205484  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1212 21:40:18.266438  543793 cri.go:89] found id: ""
	I1212 21:40:18.266460  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.266469  543793 logs.go:284] No container was found matching "etcd"
	I1212 21:40:18.266474  543793 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1212 21:40:18.266531  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1212 21:40:18.306994  543793 cri.go:89] found id: ""
	I1212 21:40:18.307017  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.307027  543793 logs.go:284] No container was found matching "coredns"
	I1212 21:40:18.307033  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1212 21:40:18.307093  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1212 21:40:18.340681  543793 cri.go:89] found id: ""
	I1212 21:40:18.340704  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.340713  543793 logs.go:284] No container was found matching "kube-scheduler"
	I1212 21:40:18.340719  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1212 21:40:18.340775  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1212 21:40:18.374180  543793 cri.go:89] found id: ""
	I1212 21:40:18.374203  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.374211  543793 logs.go:284] No container was found matching "kube-proxy"
	I1212 21:40:18.374217  543793 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1212 21:40:18.374273  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1212 21:40:18.411000  543793 cri.go:89] found id: ""
	I1212 21:40:18.411023  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.411032  543793 logs.go:284] No container was found matching "kube-controller-manager"
	I1212 21:40:18.411044  543793 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1212 21:40:18.411098  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1212 21:40:18.438624  543793 cri.go:89] found id: ""
	I1212 21:40:18.438646  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.438655  543793 logs.go:284] No container was found matching "kindnet"
	I1212 21:40:18.438660  543793 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1212 21:40:18.438715  543793 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1212 21:40:18.474576  543793 cri.go:89] found id: ""
	I1212 21:40:18.474604  543793 logs.go:282] 0 containers: []
	W1212 21:40:18.474618  543793 logs.go:284] No container was found matching "storage-provisioner"
	I1212 21:40:18.474629  543793 logs.go:123] Gathering logs for kubelet ...
	I1212 21:40:18.474647  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1212 21:40:18.558047  543793 logs.go:123] Gathering logs for dmesg ...
	I1212 21:40:18.558084  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1212 21:40:18.576827  543793 logs.go:123] Gathering logs for describe nodes ...
	I1212 21:40:18.576928  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1212 21:40:18.657911  543793 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1212 21:40:18.657936  543793 logs.go:123] Gathering logs for CRI-O ...
	I1212 21:40:18.657952  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1212 21:40:18.699088  543793 logs.go:123] Gathering logs for container status ...
	I1212 21:40:18.699127  543793 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1212 21:40:18.783938  543793 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000129118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1212 21:40:18.783994  543793 out.go:285] * 
	W1212 21:40:18.784053  543793 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000129118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:40:18.784072  543793 out.go:285] * 
	W1212 21:40:18.786302  543793 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:40:18.791587  543793 out.go:203] 
	W1212 21:40:18.793663  543793 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.0-1084-aws
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000129118s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1212 21:40:18.793710  543793 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1212 21:40:18.793732  543793 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1212 21:40:18.796803  543793 out.go:203] 
	I1212 21:40:17.850868  579944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:40:17.850931  579944 machine.go:97] duration metric: took 4.270078737s to provisionDockerMachine
	I1212 21:40:17.850950  579944 client.go:176] duration metric: took 9.802026998s to LocalClient.Create
	I1212 21:40:17.850972  579944 start.go:167] duration metric: took 9.802112168s to libmachine.API.Create "force-systemd-flag-700267"
	I1212 21:40:17.850982  579944 start.go:293] postStartSetup for "force-systemd-flag-700267" (driver="docker")
	I1212 21:40:17.850992  579944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:40:17.851057  579944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:40:17.851115  579944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700267
	I1212 21:40:17.868974  579944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/force-systemd-flag-700267/id_rsa Username:docker}
	I1212 21:40:17.984731  579944 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:40:17.988871  579944 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:40:17.988901  579944 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:40:17.988914  579944 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:40:17.988971  579944 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:40:17.989067  579944 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:40:17.989080  579944 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> /etc/ssl/certs/3648532.pem
	I1212 21:40:17.989178  579944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:40:18.002065  579944 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:40:18.026183  579944 start.go:296] duration metric: took 175.185091ms for postStartSetup
	I1212 21:40:18.026619  579944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-700267
	I1212 21:40:18.044601  579944 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/force-systemd-flag-700267/config.json ...
	I1212 21:40:18.044898  579944 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:40:18.044951  579944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700267
	I1212 21:40:18.063228  579944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/force-systemd-flag-700267/id_rsa Username:docker}
	I1212 21:40:18.176927  579944 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:40:18.182771  579944 start.go:128] duration metric: took 10.13771024s to createHost
	I1212 21:40:18.182802  579944 start.go:83] releasing machines lock for "force-systemd-flag-700267", held for 10.137846233s
	I1212 21:40:18.182881  579944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-700267
	I1212 21:40:18.205177  579944 ssh_runner.go:195] Run: cat /version.json
	I1212 21:40:18.205211  579944 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:40:18.205231  579944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700267
	I1212 21:40:18.205271  579944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-700267
	I1212 21:40:18.223831  579944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/force-systemd-flag-700267/id_rsa Username:docker}
	I1212 21:40:18.254853  579944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33410 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/force-systemd-flag-700267/id_rsa Username:docker}
	I1212 21:40:18.336611  579944 ssh_runner.go:195] Run: systemctl --version
	I1212 21:40:18.436055  579944 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:40:18.486664  579944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:40:18.492056  579944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:40:18.492157  579944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:40:18.534799  579944 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1212 21:40:18.534835  579944 start.go:496] detecting cgroup driver to use...
	I1212 21:40:18.534879  579944 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1212 21:40:18.534986  579944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:40:18.570180  579944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:40:18.586495  579944 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:40:18.586572  579944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:40:18.604944  579944 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:40:18.626923  579944 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:40:18.837722  579944 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:40:19.011544  579944 docker.go:234] disabling docker service ...
	I1212 21:40:19.011639  579944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:40:19.044489  579944 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:40:19.067221  579944 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:40:19.228358  579944 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:40:19.392239  579944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:40:19.406597  579944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:40:19.431903  579944 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:40:19.431975  579944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:40:19.449318  579944 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1212 21:40:19.449390  579944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:40:19.478324  579944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:40:19.498222  579944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:40:19.511192  579944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:40:19.531982  579944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:40:19.544029  579944 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:40:19.565600  579944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:40:19.577503  579944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:40:19.590798  579944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:40:19.599700  579944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:40:19.788754  579944 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:40:19.996820  579944 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:40:19.996893  579944 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:40:20.012930  579944 start.go:564] Will wait 60s for crictl version
	I1212 21:40:20.013046  579944 ssh_runner.go:195] Run: which crictl
	I1212 21:40:20.018119  579944 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:40:20.055037  579944 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:40:20.055128  579944 ssh_runner.go:195] Run: crio --version
	I1212 21:40:20.096878  579944 ssh_runner.go:195] Run: crio --version
	I1212 21:40:20.143636  579944 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	
	
	==> CRI-O <==
	Dec 12 21:28:03 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:28:03.565354076Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 12 21:28:03 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:28:03.565387635Z" level=info msg="Starting seccomp notifier watcher"
	Dec 12 21:28:03 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:28:03.565423615Z" level=info msg="Create NRI interface"
	Dec 12 21:28:03 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:28:03.565515866Z" level=info msg="built-in NRI default validator is disabled"
	Dec 12 21:28:03 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:28:03.565524235Z" level=info msg="runtime interface created"
	Dec 12 21:28:03 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:28:03.56553514Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 12 21:28:03 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:28:03.565541458Z" level=info msg="runtime interface starting up..."
	Dec 12 21:28:03 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:28:03.565548178Z" level=info msg="starting plugins..."
	Dec 12 21:28:03 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:28:03.565561282Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 12 21:28:03 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:28:03.565619916Z" level=info msg="No systemd watchdog enabled"
	Dec 12 21:28:03 kubernetes-upgrade-905307 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	Dec 12 21:32:14 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:32:14.578183805Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=c68db48b-0288-467a-96a1-74c78ac831f4 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:32:14 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:32:14.578889018Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=13f99777-a7af-40db-a438-39a9f5f7bae2 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:32:14 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:32:14.579398078Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=c10796c7-96f1-49d9-a0bd-ad4c17c8fdfa name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:32:14 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:32:14.579829508Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=6c0f13b5-3f80-430e-a433-8d05c99a13b0 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:32:14 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:32:14.580266542Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=3515fcee-4b4d-469c-8187-33c5ee9b62fd name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:32:14 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:32:14.581015538Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=5307602e-26c5-463c-a54c-8147fd0d60ea name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:32:14 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:32:14.58141363Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=f839a4e1-1260-4dc6-abf9-d94a575dfa5c name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:36:16 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:36:16.845455523Z" level=info msg="Checking image status: registry.k8s.io/kube-apiserver:v1.35.0-beta.0" id=a157c3e8-06d0-4d03-9024-11e4f530522e name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:36:16 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:36:16.846180814Z" level=info msg="Checking image status: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" id=6f661470-2c8b-4ab6-85fa-cb58d9b29c77 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:36:16 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:36:16.846784069Z" level=info msg="Checking image status: registry.k8s.io/kube-scheduler:v1.35.0-beta.0" id=f558d50c-75d2-421a-9e33-d71a25f8da88 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:36:16 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:36:16.847342483Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-beta.0" id=730e5861-5cd0-4bdd-a023-9ac9625096e3 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:36:16 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:36:16.847942424Z" level=info msg="Checking image status: registry.k8s.io/coredns/coredns:v1.13.1" id=791daa2f-bc96-4fce-a8bc-7d3826e536a3 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:36:16 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:36:16.848630316Z" level=info msg="Checking image status: registry.k8s.io/pause:3.10.1" id=87563e54-11cb-476c-b0a0-b4d363933de7 name=/runtime.v1.ImageService/ImageStatus
	Dec 12 21:36:16 kubernetes-upgrade-905307 crio[618]: time="2025-12-12T21:36:16.850733877Z" level=info msg="Checking image status: registry.k8s.io/etcd:3.6.5-0" id=41de8843-7552-4b71-acbb-d6942c0e15b4 name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec12 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.790478] overlayfs: idmapped layers are currently not supported
	[Dec12 21:05] overlayfs: idmapped layers are currently not supported
	[  +3.613273] overlayfs: idmapped layers are currently not supported
	[Dec12 21:06] overlayfs: idmapped layers are currently not supported
	[Dec12 21:07] overlayfs: idmapped layers are currently not supported
	[ +26.617506] overlayfs: idmapped layers are currently not supported
	[Dec12 21:09] overlayfs: idmapped layers are currently not supported
	[Dec12 21:13] overlayfs: idmapped layers are currently not supported
	[Dec12 21:14] overlayfs: idmapped layers are currently not supported
	[Dec12 21:15] overlayfs: idmapped layers are currently not supported
	[Dec12 21:16] overlayfs: idmapped layers are currently not supported
	[Dec12 21:17] overlayfs: idmapped layers are currently not supported
	[Dec12 21:19] overlayfs: idmapped layers are currently not supported
	[ +26.409125] overlayfs: idmapped layers are currently not supported
	[Dec12 21:20] overlayfs: idmapped layers are currently not supported
	[ +45.357391] overlayfs: idmapped layers are currently not supported
	[Dec12 21:21] overlayfs: idmapped layers are currently not supported
	[ +55.414462] overlayfs: idmapped layers are currently not supported
	[Dec12 21:22] overlayfs: idmapped layers are currently not supported
	[Dec12 21:23] overlayfs: idmapped layers are currently not supported
	[Dec12 21:24] overlayfs: idmapped layers are currently not supported
	[Dec12 21:26] overlayfs: idmapped layers are currently not supported
	[Dec12 21:27] overlayfs: idmapped layers are currently not supported
	[Dec12 21:38] overlayfs: idmapped layers are currently not supported
	
	
	==> kernel <==
	 21:40:21 up  4:22,  0 user,  load average: 2.54, 1.68, 1.58
	Linux kubernetes-upgrade-905307 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 12 21:40:18 kubernetes-upgrade-905307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:40:19 kubernetes-upgrade-905307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 966.
	Dec 12 21:40:19 kubernetes-upgrade-905307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:40:19 kubernetes-upgrade-905307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:40:19 kubernetes-upgrade-905307 kubelet[12330]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 21:40:19 kubernetes-upgrade-905307 kubelet[12330]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 21:40:19 kubernetes-upgrade-905307 kubelet[12330]: E1212 21:40:19.565193   12330 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:40:19 kubernetes-upgrade-905307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:40:19 kubernetes-upgrade-905307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:40:20 kubernetes-upgrade-905307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 967.
	Dec 12 21:40:20 kubernetes-upgrade-905307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:40:20 kubernetes-upgrade-905307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:40:20 kubernetes-upgrade-905307 kubelet[12351]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 21:40:20 kubernetes-upgrade-905307 kubelet[12351]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 21:40:20 kubernetes-upgrade-905307 kubelet[12351]: E1212 21:40:20.299509   12351 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:40:20 kubernetes-upgrade-905307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:40:20 kubernetes-upgrade-905307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 12 21:40:20 kubernetes-upgrade-905307 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 968.
	Dec 12 21:40:20 kubernetes-upgrade-905307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:40:20 kubernetes-upgrade-905307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 12 21:40:21 kubernetes-upgrade-905307 kubelet[12440]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 21:40:21 kubernetes-upgrade-905307 kubelet[12440]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Dec 12 21:40:21 kubernetes-upgrade-905307 kubelet[12440]: E1212 21:40:21.095631   12440 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 12 21:40:21 kubernetes-upgrade-905307 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 12 21:40:21 kubernetes-upgrade-905307 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-905307 -n kubernetes-upgrade-905307
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p kubernetes-upgrade-905307 -n kubernetes-upgrade-905307: exit status 2 (487.244574ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-905307" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-905307" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-905307
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-905307: (2.498915021s)
--- FAIL: TestKubernetesUpgrade (793.51s)
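The kubelet journal captured above shows the actual blocker for this upgrade run: the v1.35.0-beta.0 kubelet exits with "kubelet is configured to not run on a host using cgroup v1", so kubeadm's wait-control-plane phase never sees 127.0.0.1:10248/healthz come up. The failure output itself names two knobs; the sketch below only restates them and is not a verified fix. The minikube command and the --extra-config value are taken verbatim from the suggestion printed in the log, while the failCgroupV1 spelling is an assumption based on the 'FailCgroupV1' SystemVerification warning and KEP-5573, not checked against this kubelet build.

	# Suggestion printed by minikube in the failure output above:
	minikube start -p kubernetes-upgrade-905307 \
	  --extra-config=kubelet.cgroup-driver=systemd

	# Alternative named by the kubeadm warning: stay on cgroup v1 but set
	# failCgroupV1: false in the kubelet configuration that kubeadm writes to
	# /var/lib/kubelet/config.yaml (field name assumed from the warning text).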

                                                
                                    
x
+
TestPause/serial/Pause (6.99s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-634913 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-arm64 pause -p pause-634913 --alsologtostderr -v=5: exit status 80 (2.175068943s)

                                                
                                                
-- stdout --
	* Pausing node pause-634913 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:39:58.397251  578551 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:39:58.397877  578551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:39:58.397913  578551 out.go:374] Setting ErrFile to fd 2...
	I1212 21:39:58.397933  578551 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:39:58.398236  578551 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:39:58.398562  578551 out.go:368] Setting JSON to false
	I1212 21:39:58.398614  578551 mustload.go:66] Loading cluster: pause-634913
	I1212 21:39:58.399140  578551 config.go:182] Loaded profile config "pause-634913": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:39:58.399669  578551 cli_runner.go:164] Run: docker container inspect pause-634913 --format={{.State.Status}}
	I1212 21:39:58.416569  578551 host.go:66] Checking if "pause-634913" exists ...
	I1212 21:39:58.417339  578551 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:39:58.475966  578551 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 21:39:58.465652236 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:39:58.476668  578551 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22112/minikube-v1.37.0-1765505725-22112-arm64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765505725-22112/minikube-v1.37.0-1765505725-22112-arm64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765505725-22112-arm64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-634913 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1212 21:39:58.479775  578551 out.go:179] * Pausing node pause-634913 ... 
	I1212 21:39:58.483501  578551 host.go:66] Checking if "pause-634913" exists ...
	I1212 21:39:58.483855  578551 ssh_runner.go:195] Run: systemctl --version
	I1212 21:39:58.483905  578551 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:58.500928  578551 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:58.606922  578551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:39:58.619856  578551 pause.go:52] kubelet running: true
	I1212 21:39:58.619980  578551 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 21:39:58.843599  578551 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 21:39:58.843690  578551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 21:39:58.920629  578551 cri.go:89] found id: "04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49"
	I1212 21:39:58.920654  578551 cri.go:89] found id: "9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9"
	I1212 21:39:58.920658  578551 cri.go:89] found id: "089141a6f9a07e9172e41fd06fc0e7f4302cf7f86789a3b911d66cfb1745662a"
	I1212 21:39:58.920662  578551 cri.go:89] found id: "dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39"
	I1212 21:39:58.920665  578551 cri.go:89] found id: "a1104339af1705164883992605c6d239c40cc3820c8e583be873847c54a5fdaf"
	I1212 21:39:58.920669  578551 cri.go:89] found id: "15f89e22e178c3af539227dee8fefc1f4883f34271b0332253a4f08509f0c879"
	I1212 21:39:58.920672  578551 cri.go:89] found id: "9058d5f4ba61315fb9ccc3747fdcbdafa77a1fc71db2d8f2d8363f05cef61c0a"
	I1212 21:39:58.920675  578551 cri.go:89] found id: "af50e0432616e5578e5a23191899d072d7fe0365c765d16d22ac3d347d2d9094"
	I1212 21:39:58.920678  578551 cri.go:89] found id: "65c4628f15cc95be90c08bf36a69f6cb1b76eee884867060f1e71c3247881865"
	I1212 21:39:58.920685  578551 cri.go:89] found id: "a4e451f1e032b800e0cc40a399f4d41663317a67b1b0f6595a114ab35837c89a"
	I1212 21:39:58.920689  578551 cri.go:89] found id: "d0fab9b020f0dd464170e1bae896878b70b11d803849b796b85ccc8919abdbb4"
	I1212 21:39:58.920692  578551 cri.go:89] found id: "28b7c58c880a7ef000a9e96201895fc8fa32c3ff7f5ec9ac86b3ccc8f337cb63"
	I1212 21:39:58.920695  578551 cri.go:89] found id: "76cd55267990721a658b7986b785e1c9f8486c092af63ee52fa6bab967adf8fe"
	I1212 21:39:58.920699  578551 cri.go:89] found id: "524fbadbdc48e9b2e13c860539bd61a4c7b22c37117dc771d7b59db12046b288"
	I1212 21:39:58.920701  578551 cri.go:89] found id: ""
	I1212 21:39:58.920753  578551 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 21:39:58.931834  578551 retry.go:31] will retry after 245.569112ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T21:39:58Z" level=error msg="open /run/runc: no such file or directory"
	I1212 21:39:59.178351  578551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:39:59.191887  578551 pause.go:52] kubelet running: false
	I1212 21:39:59.191976  578551 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 21:39:59.326563  578551 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 21:39:59.326640  578551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 21:39:59.399362  578551 cri.go:89] found id: "04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49"
	I1212 21:39:59.399437  578551 cri.go:89] found id: "9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9"
	I1212 21:39:59.399446  578551 cri.go:89] found id: "089141a6f9a07e9172e41fd06fc0e7f4302cf7f86789a3b911d66cfb1745662a"
	I1212 21:39:59.399450  578551 cri.go:89] found id: "dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39"
	I1212 21:39:59.399454  578551 cri.go:89] found id: "a1104339af1705164883992605c6d239c40cc3820c8e583be873847c54a5fdaf"
	I1212 21:39:59.399457  578551 cri.go:89] found id: "15f89e22e178c3af539227dee8fefc1f4883f34271b0332253a4f08509f0c879"
	I1212 21:39:59.399460  578551 cri.go:89] found id: "9058d5f4ba61315fb9ccc3747fdcbdafa77a1fc71db2d8f2d8363f05cef61c0a"
	I1212 21:39:59.399464  578551 cri.go:89] found id: "af50e0432616e5578e5a23191899d072d7fe0365c765d16d22ac3d347d2d9094"
	I1212 21:39:59.399467  578551 cri.go:89] found id: "65c4628f15cc95be90c08bf36a69f6cb1b76eee884867060f1e71c3247881865"
	I1212 21:39:59.399486  578551 cri.go:89] found id: "a4e451f1e032b800e0cc40a399f4d41663317a67b1b0f6595a114ab35837c89a"
	I1212 21:39:59.399489  578551 cri.go:89] found id: "d0fab9b020f0dd464170e1bae896878b70b11d803849b796b85ccc8919abdbb4"
	I1212 21:39:59.399493  578551 cri.go:89] found id: "28b7c58c880a7ef000a9e96201895fc8fa32c3ff7f5ec9ac86b3ccc8f337cb63"
	I1212 21:39:59.399496  578551 cri.go:89] found id: "76cd55267990721a658b7986b785e1c9f8486c092af63ee52fa6bab967adf8fe"
	I1212 21:39:59.399499  578551 cri.go:89] found id: "524fbadbdc48e9b2e13c860539bd61a4c7b22c37117dc771d7b59db12046b288"
	I1212 21:39:59.399501  578551 cri.go:89] found id: ""
	I1212 21:39:59.399549  578551 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 21:39:59.410047  578551 retry.go:31] will retry after 463.446407ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T21:39:59Z" level=error msg="open /run/runc: no such file or directory"
	I1212 21:39:59.873764  578551 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:39:59.887269  578551 pause.go:52] kubelet running: false
	I1212 21:39:59.887350  578551 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1212 21:40:00.168681  578551 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1212 21:40:00.168785  578551 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1212 21:40:00.380964  578551 cri.go:89] found id: "04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49"
	I1212 21:40:00.380995  578551 cri.go:89] found id: "9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9"
	I1212 21:40:00.381000  578551 cri.go:89] found id: "089141a6f9a07e9172e41fd06fc0e7f4302cf7f86789a3b911d66cfb1745662a"
	I1212 21:40:00.381004  578551 cri.go:89] found id: "dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39"
	I1212 21:40:00.381008  578551 cri.go:89] found id: "a1104339af1705164883992605c6d239c40cc3820c8e583be873847c54a5fdaf"
	I1212 21:40:00.381017  578551 cri.go:89] found id: "15f89e22e178c3af539227dee8fefc1f4883f34271b0332253a4f08509f0c879"
	I1212 21:40:00.381020  578551 cri.go:89] found id: "9058d5f4ba61315fb9ccc3747fdcbdafa77a1fc71db2d8f2d8363f05cef61c0a"
	I1212 21:40:00.381023  578551 cri.go:89] found id: "af50e0432616e5578e5a23191899d072d7fe0365c765d16d22ac3d347d2d9094"
	I1212 21:40:00.381028  578551 cri.go:89] found id: "65c4628f15cc95be90c08bf36a69f6cb1b76eee884867060f1e71c3247881865"
	I1212 21:40:00.381035  578551 cri.go:89] found id: "a4e451f1e032b800e0cc40a399f4d41663317a67b1b0f6595a114ab35837c89a"
	I1212 21:40:00.381038  578551 cri.go:89] found id: "d0fab9b020f0dd464170e1bae896878b70b11d803849b796b85ccc8919abdbb4"
	I1212 21:40:00.381043  578551 cri.go:89] found id: "28b7c58c880a7ef000a9e96201895fc8fa32c3ff7f5ec9ac86b3ccc8f337cb63"
	I1212 21:40:00.381046  578551 cri.go:89] found id: "76cd55267990721a658b7986b785e1c9f8486c092af63ee52fa6bab967adf8fe"
	I1212 21:40:00.381051  578551 cri.go:89] found id: "524fbadbdc48e9b2e13c860539bd61a4c7b22c37117dc771d7b59db12046b288"
	I1212 21:40:00.381054  578551 cri.go:89] found id: ""
	I1212 21:40:00.381113  578551 ssh_runner.go:195] Run: sudo runc list -f json
	I1212 21:40:00.466822  578551 out.go:203] 
	W1212 21:40:00.485493  578551 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T21:40:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T21:40:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1212 21:40:00.485717  578551 out.go:285] * 
	* 
	W1212 21:40:00.494958  578551 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1212 21:40:00.502224  578551 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-arm64 pause -p pause-634913 --alsologtostderr -v=5" : exit status 80
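The pause failure sits in minikube's pre-pause state check rather than in CRI-O itself: crictl enumerates the kube-system containers each time, but every retried "sudo runc list -f json" fails with "open /run/runc: no such file or directory", which is what GUEST_PAUSE ultimately surfaces. A small sketch for inspecting that state by hand, reusing the exact commands from the log above; running them through `minikube ssh --` from the host is an assumed workflow for manual debugging, not something the test does.

	# State check that the pause path retries and finally gives up on:
	minikube ssh -p pause-634913 -- sudo runc list -f json

	# CRI-level view that does succeed in the log above:
	minikube ssh -p pause-634913 -- sudo crictl ps -a --quiet \
	  --label io.kubernetes.pod.namespace=kube-system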
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-634913
helpers_test.go:244: (dbg) docker inspect pause-634913:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe",
	        "Created": "2025-12-12T21:38:14.803480018Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 574676,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:38:14.871244197Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe/hosts",
	        "LogPath": "/var/lib/docker/containers/8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe/8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe-json.log",
	        "Name": "/pause-634913",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-634913:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-634913",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe",
	                "LowerDir": "/var/lib/docker/overlay2/29c8287a09b57266b3b1afc804017b598197356acc695a455aa81f94f514a2f9-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29c8287a09b57266b3b1afc804017b598197356acc695a455aa81f94f514a2f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29c8287a09b57266b3b1afc804017b598197356acc695a455aa81f94f514a2f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29c8287a09b57266b3b1afc804017b598197356acc695a455aa81f94f514a2f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-634913",
	                "Source": "/var/lib/docker/volumes/pause-634913/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-634913",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-634913",
	                "name.minikube.sigs.k8s.io": "pause-634913",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d75ae54978d15c8f39fccc5fd8b23ddab16ed57cc8669bc0d0362a5d8dabc5e2",
	            "SandboxKey": "/var/run/docker/netns/d75ae54978d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-634913": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:87:cb:a4:bd:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14883b96389dfc4343f0d238b8addc8114fd0d727d68ca1ef9c8cbda3610474e",
	                    "EndpointID": "16080da94973d2f7f11393e9cf527a167ecfec5081ed61cd4f568800de0d7ee3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-634913",
	                        "8ba8b226fc61"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
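Editor's note: the inspect payload above is the full container document; the helpers later in this log pull single fields out of it with Go templates (e.g. --format={{.State.Status}} and the 22/tcp host-port template). A small, illustrative Go sketch of decoding just the fields this post-mortem consults (State and the published 22/tcp port); the struct below is a minimal assumption for demonstration, not minikube's actual types:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Minimal view of the docker inspect document shown above;
// only the fields the post-mortem actually looks at.
type container struct {
	State struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	} `json:"State"`
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIP   string `json:"HostIp"`
			HostPort string `json:"HostPort"`
		} `json:"Ports"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "pause-634913").Output()
	if err != nil {
		panic(err)
	}
	var cs []container // docker inspect returns a JSON array
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	c := cs[0]
	fmt.Println("status:", c.State.Status, "paused:", c.State.Paused)
	if ssh := c.NetworkSettings.Ports["22/tcp"]; len(ssh) > 0 {
		fmt.Println("ssh port:", ssh[0].HostIP+":"+ssh[0].HostPort)
	}
}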
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-634913 -n pause-634913
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-634913 -n pause-634913: exit status 2 (413.669003ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-634913 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-634913 logs -n 25: (1.460530447s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-406866 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:25 UTC │ 12 Dec 25 21:26 UTC │
	│ start   │ -p missing-upgrade-992322 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-992322    │ jenkins │ v1.35.0 │ 12 Dec 25 21:25 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p NoKubernetes-406866 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:26 UTC │
	│ delete  │ -p NoKubernetes-406866                                                                                                                          │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:26 UTC │
	│ start   │ -p NoKubernetes-406866 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:26 UTC │
	│ ssh     │ -p NoKubernetes-406866 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │                     │
	│ stop    │ -p NoKubernetes-406866                                                                                                                          │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p NoKubernetes-406866 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p missing-upgrade-992322 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-992322    │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ ssh     │ -p NoKubernetes-406866 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │                     │
	│ delete  │ -p NoKubernetes-406866                                                                                                                          │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-905307 │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ delete  │ -p missing-upgrade-992322                                                                                                                       │ missing-upgrade-992322    │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ stop    │ -p kubernetes-upgrade-905307                                                                                                                    │ kubernetes-upgrade-905307 │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p stopped-upgrade-302169 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-302169    │ jenkins │ v1.35.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:28 UTC │
	│ start   │ -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-905307 │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │                     │
	│ stop    │ stopped-upgrade-302169 stop                                                                                                                     │ stopped-upgrade-302169    │ jenkins │ v1.35.0 │ 12 Dec 25 21:28 UTC │ 12 Dec 25 21:28 UTC │
	│ start   │ -p stopped-upgrade-302169 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-302169    │ jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │ 12 Dec 25 21:32 UTC │
	│ delete  │ -p stopped-upgrade-302169                                                                                                                       │ stopped-upgrade-302169    │ jenkins │ v1.37.0 │ 12 Dec 25 21:33 UTC │ 12 Dec 25 21:33 UTC │
	│ start   │ -p running-upgrade-649209 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-649209    │ jenkins │ v1.35.0 │ 12 Dec 25 21:33 UTC │ 12 Dec 25 21:33 UTC │
	│ start   │ -p running-upgrade-649209 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-649209    │ jenkins │ v1.37.0 │ 12 Dec 25 21:33 UTC │ 12 Dec 25 21:38 UTC │
	│ delete  │ -p running-upgrade-649209                                                                                                                       │ running-upgrade-649209    │ jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ start   │ -p pause-634913 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-634913              │ jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:39 UTC │
	│ start   │ -p pause-634913 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-634913              │ jenkins │ v1.37.0 │ 12 Dec 25 21:39 UTC │ 12 Dec 25 21:39 UTC │
	│ pause   │ -p pause-634913 --alsologtostderr -v=5                                                                                                          │ pause-634913              │ jenkins │ v1.37.0 │ 12 Dec 25 21:39 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:39:30
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:39:30.809484  577249 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:39:30.809600  577249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:39:30.809612  577249 out.go:374] Setting ErrFile to fd 2...
	I1212 21:39:30.809617  577249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:39:30.809874  577249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:39:30.810268  577249 out.go:368] Setting JSON to false
	I1212 21:39:30.811282  577249 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15723,"bootTime":1765559848,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 21:39:30.811355  577249 start.go:143] virtualization:  
	I1212 21:39:30.813376  577249 out.go:179] * [pause-634913] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 21:39:30.814911  577249 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:39:30.814970  577249 notify.go:221] Checking for updates...
	I1212 21:39:30.818583  577249 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:39:30.820824  577249 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:39:30.822101  577249 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 21:39:30.823274  577249 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 21:39:30.824501  577249 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:39:30.826205  577249 config.go:182] Loaded profile config "pause-634913": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:39:30.826765  577249 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:39:30.867058  577249 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 21:39:30.867184  577249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:39:30.929343  577249 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 21:39:30.919955353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:39:30.929461  577249 docker.go:319] overlay module found
	I1212 21:39:30.930849  577249 out.go:179] * Using the docker driver based on existing profile
	I1212 21:39:30.932043  577249 start.go:309] selected driver: docker
	I1212 21:39:30.932065  577249 start.go:927] validating driver "docker" against &{Name:pause-634913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-634913 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:39:30.932212  577249 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:39:30.932357  577249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:39:30.989228  577249 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 21:39:30.979985251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:39:30.989673  577249 cni.go:84] Creating CNI manager for ""
	I1212 21:39:30.989745  577249 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 21:39:30.989800  577249 start.go:353] cluster config:
	{Name:pause-634913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-634913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:39:30.991252  577249 out.go:179] * Starting "pause-634913" primary control-plane node in "pause-634913" cluster
	I1212 21:39:30.992362  577249 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:39:30.993789  577249 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:39:30.994953  577249 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:39:30.995019  577249 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 21:39:30.995032  577249 cache.go:65] Caching tarball of preloaded images
	I1212 21:39:30.995028  577249 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:39:30.995117  577249 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:39:30.995128  577249 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:39:30.995262  577249 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/config.json ...
	I1212 21:39:31.017959  577249 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:39:31.017987  577249 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:39:31.018004  577249 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:39:31.018038  577249 start.go:360] acquireMachinesLock for pause-634913: {Name:mk73b6f645c53f163db55925e2dc12b1ddc178e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:39:31.018110  577249 start.go:364] duration metric: took 49.255µs to acquireMachinesLock for "pause-634913"
	I1212 21:39:31.018136  577249 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:39:31.018146  577249 fix.go:54] fixHost starting: 
	I1212 21:39:31.018435  577249 cli_runner.go:164] Run: docker container inspect pause-634913 --format={{.State.Status}}
	I1212 21:39:31.036463  577249 fix.go:112] recreateIfNeeded on pause-634913: state=Running err=<nil>
	W1212 21:39:31.036493  577249 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:39:31.037981  577249 out.go:252] * Updating the running docker "pause-634913" container ...
	I1212 21:39:31.038008  577249 machine.go:94] provisionDockerMachine start ...
	I1212 21:39:31.038088  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:31.056656  577249 main.go:143] libmachine: Using SSH client type: native
	I1212 21:39:31.056991  577249 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1212 21:39:31.057007  577249 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:39:31.213459  577249 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-634913
	
	I1212 21:39:31.213495  577249 ubuntu.go:182] provisioning hostname "pause-634913"
	I1212 21:39:31.213562  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:31.234125  577249 main.go:143] libmachine: Using SSH client type: native
	I1212 21:39:31.234444  577249 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1212 21:39:31.234456  577249 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-634913 && echo "pause-634913" | sudo tee /etc/hostname
	I1212 21:39:31.403462  577249 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-634913
	
	I1212 21:39:31.403600  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:31.421380  577249 main.go:143] libmachine: Using SSH client type: native
	I1212 21:39:31.421732  577249 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1212 21:39:31.421808  577249 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-634913' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-634913/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-634913' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:39:31.577338  577249 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:39:31.577420  577249 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:39:31.577454  577249 ubuntu.go:190] setting up certificates
	I1212 21:39:31.577499  577249 provision.go:84] configureAuth start
	I1212 21:39:31.577592  577249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-634913
	I1212 21:39:31.597918  577249 provision.go:143] copyHostCerts
	I1212 21:39:31.598010  577249 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:39:31.598026  577249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:39:31.598108  577249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:39:31.598216  577249 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:39:31.598228  577249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:39:31.598255  577249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:39:31.598318  577249 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:39:31.598326  577249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:39:31.598352  577249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:39:31.598411  577249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.pause-634913 san=[127.0.0.1 192.168.85.2 localhost minikube pause-634913]
	I1212 21:39:31.818076  577249 provision.go:177] copyRemoteCerts
	I1212 21:39:31.818148  577249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:39:31.818196  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:31.835957  577249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:31.944677  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:39:31.963823  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 21:39:31.982352  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:39:32.003325  577249 provision.go:87] duration metric: took 425.788858ms to configureAuth
	I1212 21:39:32.003355  577249 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:39:32.003628  577249 config.go:182] Loaded profile config "pause-634913": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:39:32.003757  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:32.026138  577249 main.go:143] libmachine: Using SSH client type: native
	I1212 21:39:32.026523  577249 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1212 21:39:32.026542  577249 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:39:37.399765  577249 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:39:37.399791  577249 machine.go:97] duration metric: took 6.361774093s to provisionDockerMachine
	I1212 21:39:37.399804  577249 start.go:293] postStartSetup for "pause-634913" (driver="docker")
	I1212 21:39:37.399814  577249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:39:37.399890  577249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:39:37.399940  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:37.416513  577249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:37.524735  577249 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:39:37.528256  577249 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:39:37.528291  577249 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:39:37.528303  577249 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:39:37.528360  577249 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:39:37.528481  577249 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:39:37.528600  577249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:39:37.536306  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:39:37.554802  577249 start.go:296] duration metric: took 154.981537ms for postStartSetup
	I1212 21:39:37.554935  577249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:39:37.554995  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:37.572691  577249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:37.677773  577249 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:39:37.683008  577249 fix.go:56] duration metric: took 6.664854128s for fixHost
	I1212 21:39:37.683036  577249 start.go:83] releasing machines lock for "pause-634913", held for 6.664911638s
	I1212 21:39:37.683105  577249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-634913
	I1212 21:39:37.700075  577249 ssh_runner.go:195] Run: cat /version.json
	I1212 21:39:37.700134  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:37.700151  577249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:39:37.700227  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:37.718872  577249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:37.722554  577249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:37.913578  577249 ssh_runner.go:195] Run: systemctl --version
	I1212 21:39:37.920181  577249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:39:37.966987  577249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:39:37.973947  577249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:39:37.974067  577249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:39:37.984236  577249 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:39:37.984312  577249 start.go:496] detecting cgroup driver to use...
	I1212 21:39:37.984361  577249 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:39:37.984491  577249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:39:38.001920  577249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:39:38.020764  577249 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:39:38.020895  577249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:39:38.039677  577249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:39:38.054885  577249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:39:38.194605  577249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:39:38.325882  577249 docker.go:234] disabling docker service ...
	I1212 21:39:38.325945  577249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:39:38.341104  577249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:39:38.354783  577249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:39:38.482076  577249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:39:38.620179  577249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:39:38.633328  577249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:39:38.648551  577249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:39:38.648628  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.657646  577249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:39:38.657797  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.666883  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.676668  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.685748  577249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:39:38.694679  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.706301  577249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.715500  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.724762  577249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:39:38.733383  577249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:39:38.742022  577249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:39:38.886277  577249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:39:39.090096  577249 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:39:39.090218  577249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:39:39.094552  577249 start.go:564] Will wait 60s for crictl version
	I1212 21:39:39.094662  577249 ssh_runner.go:195] Run: which crictl
	I1212 21:39:39.098546  577249 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:39:39.124659  577249 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:39:39.124819  577249 ssh_runner.go:195] Run: crio --version
	I1212 21:39:39.153670  577249 ssh_runner.go:195] Run: crio --version
	I1212 21:39:39.184591  577249 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:39:39.185907  577249 cli_runner.go:164] Run: docker network inspect pause-634913 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:39:39.202698  577249 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 21:39:39.206986  577249 kubeadm.go:884] updating cluster {Name:pause-634913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-634913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:39:39.207144  577249 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:39:39.207203  577249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:39:39.243951  577249 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:39:39.243977  577249 crio.go:433] Images already preloaded, skipping extraction
	I1212 21:39:39.244031  577249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:39:39.269474  577249 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:39:39.269497  577249 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:39:39.269504  577249 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1212 21:39:39.269649  577249 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-634913 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-634913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:39:39.269730  577249 ssh_runner.go:195] Run: crio config
	I1212 21:39:39.322333  577249 cni.go:84] Creating CNI manager for ""
	I1212 21:39:39.322359  577249 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 21:39:39.322379  577249 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:39:39.322439  577249 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-634913 NodeName:pause-634913 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:39:39.322582  577249 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-634913"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:39:39.322664  577249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:39:39.330438  577249 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:39:39.330505  577249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:39:39.338001  577249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1212 21:39:39.351028  577249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:39:39.364462  577249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1212 21:39:39.377177  577249 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:39:39.380993  577249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:39:39.526425  577249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:39:39.540091  577249 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913 for IP: 192.168.85.2
	I1212 21:39:39.540113  577249 certs.go:195] generating shared ca certs ...
	I1212 21:39:39.540129  577249 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:39:39.540341  577249 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:39:39.540810  577249 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:39:39.540853  577249 certs.go:257] generating profile certs ...
	I1212 21:39:39.540979  577249 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/client.key
	I1212 21:39:39.541099  577249 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/apiserver.key.9f95ce7c
	I1212 21:39:39.541172  577249 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/proxy-client.key
	I1212 21:39:39.541305  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:39:39.541368  577249 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:39:39.541384  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:39:39.541428  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:39:39.541479  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:39:39.541512  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:39:39.541581  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:39:39.542376  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:39:39.565016  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:39:39.582649  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:39:39.601049  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:39:39.619156  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 21:39:39.637086  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:39:39.654930  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:39:39.673442  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:39:39.691064  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:39:39.708690  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:39:39.726449  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:39:39.744154  577249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:39:39.757437  577249 ssh_runner.go:195] Run: openssl version
	I1212 21:39:39.763744  577249 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:39:39.771346  577249 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:39:39.780409  577249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:39:39.784308  577249 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:39:39.784402  577249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:39:39.825147  577249 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:39:39.832845  577249 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:39:39.840416  577249 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:39:39.847939  577249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:39:39.851682  577249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:39:39.851767  577249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:39:39.894187  577249 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:39:39.902902  577249 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:39:39.911162  577249 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:39:39.920751  577249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:39:39.933391  577249 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:39:39.933483  577249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:39:39.994405  577249 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:39:40.013547  577249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:39:40.031818  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:39:40.124058  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:39:40.253293  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:39:40.354293  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:39:40.408639  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:39:40.461073  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:39:40.533290  577249 kubeadm.go:401] StartCluster: {Name:pause-634913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-634913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:39:40.533428  577249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:39:40.533500  577249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:39:40.566630  577249 cri.go:89] found id: "04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49"
	I1212 21:39:40.566670  577249 cri.go:89] found id: "9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9"
	I1212 21:39:40.566675  577249 cri.go:89] found id: "089141a6f9a07e9172e41fd06fc0e7f4302cf7f86789a3b911d66cfb1745662a"
	I1212 21:39:40.566679  577249 cri.go:89] found id: "dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39"
	I1212 21:39:40.566682  577249 cri.go:89] found id: "a1104339af1705164883992605c6d239c40cc3820c8e583be873847c54a5fdaf"
	I1212 21:39:40.566689  577249 cri.go:89] found id: "15f89e22e178c3af539227dee8fefc1f4883f34271b0332253a4f08509f0c879"
	I1212 21:39:40.566695  577249 cri.go:89] found id: "9058d5f4ba61315fb9ccc3747fdcbdafa77a1fc71db2d8f2d8363f05cef61c0a"
	I1212 21:39:40.566698  577249 cri.go:89] found id: "af50e0432616e5578e5a23191899d072d7fe0365c765d16d22ac3d347d2d9094"
	I1212 21:39:40.566701  577249 cri.go:89] found id: "65c4628f15cc95be90c08bf36a69f6cb1b76eee884867060f1e71c3247881865"
	I1212 21:39:40.566709  577249 cri.go:89] found id: "a4e451f1e032b800e0cc40a399f4d41663317a67b1b0f6595a114ab35837c89a"
	I1212 21:39:40.566715  577249 cri.go:89] found id: "d0fab9b020f0dd464170e1bae896878b70b11d803849b796b85ccc8919abdbb4"
	I1212 21:39:40.566718  577249 cri.go:89] found id: "28b7c58c880a7ef000a9e96201895fc8fa32c3ff7f5ec9ac86b3ccc8f337cb63"
	I1212 21:39:40.566736  577249 cri.go:89] found id: "76cd55267990721a658b7986b785e1c9f8486c092af63ee52fa6bab967adf8fe"
	I1212 21:39:40.566741  577249 cri.go:89] found id: "524fbadbdc48e9b2e13c860539bd61a4c7b22c37117dc771d7b59db12046b288"
	I1212 21:39:40.566744  577249 cri.go:89] found id: ""
	I1212 21:39:40.566795  577249 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 21:39:40.584257  577249 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T21:39:40Z" level=error msg="open /run/runc: no such file or directory"
	I1212 21:39:40.584345  577249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:39:40.600078  577249 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:39:40.600180  577249 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:39:40.600338  577249 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:39:40.613589  577249 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:39:40.614312  577249 kubeconfig.go:125] found "pause-634913" server: "https://192.168.85.2:8443"
	I1212 21:39:40.615214  577249 kapi.go:59] client config for pause-634913: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:39:40.615967  577249 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 21:39:40.616053  577249 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 21:39:40.616071  577249 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 21:39:40.616078  577249 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 21:39:40.616087  577249 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 21:39:40.616508  577249 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:39:40.625231  577249 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1212 21:39:40.625268  577249 kubeadm.go:602] duration metric: took 25.081439ms to restartPrimaryControlPlane
	I1212 21:39:40.625278  577249 kubeadm.go:403] duration metric: took 91.999151ms to StartCluster
	I1212 21:39:40.625301  577249 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:39:40.625373  577249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:39:40.626354  577249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:39:40.626621  577249 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:39:40.627018  577249 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:39:40.627162  577249 config.go:182] Loaded profile config "pause-634913": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:39:40.628626  577249 out.go:179] * Verifying Kubernetes components...
	I1212 21:39:40.628716  577249 out.go:179] * Enabled addons: 
	I1212 21:39:40.630060  577249 addons.go:530] duration metric: took 3.050246ms for enable addons: enabled=[]
	I1212 21:39:40.630105  577249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:39:40.905586  577249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:39:40.929667  577249 node_ready.go:35] waiting up to 6m0s for node "pause-634913" to be "Ready" ...
	I1212 21:39:44.779470  577249 node_ready.go:49] node "pause-634913" is "Ready"
	I1212 21:39:44.779547  577249 node_ready.go:38] duration metric: took 3.849800878s for node "pause-634913" to be "Ready" ...
	I1212 21:39:44.779576  577249 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:39:44.779666  577249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:39:44.801185  577249 api_server.go:72] duration metric: took 4.174528594s to wait for apiserver process to appear ...
	I1212 21:39:44.801223  577249 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:39:44.801243  577249 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 21:39:44.810977  577249 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:39:44.811001  577249 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:39:45.301643  577249 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 21:39:45.310762  577249 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:39:45.310915  577249 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:39:45.801440  577249 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 21:39:45.811829  577249 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1212 21:39:45.813016  577249 api_server.go:141] control plane version: v1.34.2
	I1212 21:39:45.813057  577249 api_server.go:131] duration metric: took 1.011817524s to wait for apiserver health ...
	I1212 21:39:45.813067  577249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:39:45.817122  577249 system_pods.go:59] 7 kube-system pods found
	I1212 21:39:45.817164  577249 system_pods.go:61] "coredns-66bc5c9577-ckvjv" [97f3d46a-98b9-449a-b0fa-f44cf663939d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:39:45.817174  577249 system_pods.go:61] "etcd-pause-634913" [b83ea0c9-c3a1-4553-a1da-a480eaf6ef7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:39:45.817180  577249 system_pods.go:61] "kindnet-klcm9" [81d3d855-8a49-4ca3-af27-1694f53c05c6] Running
	I1212 21:39:45.817213  577249 system_pods.go:61] "kube-apiserver-pause-634913" [d21b89ed-4429-4123-8a19-b9e65599bdfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:39:45.817228  577249 system_pods.go:61] "kube-controller-manager-pause-634913" [28cfc299-989a-4f68-bdb9-afe7bfbb8989] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:39:45.817233  577249 system_pods.go:61] "kube-proxy-6qbl7" [730f0fa7-551b-4674-ab46-dafb588f985c] Running
	I1212 21:39:45.817250  577249 system_pods.go:61] "kube-scheduler-pause-634913" [daf26951-3915-4572-9596-b005274b696e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:39:45.817256  577249 system_pods.go:74] duration metric: took 4.18359ms to wait for pod list to return data ...
	I1212 21:39:45.817269  577249 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:39:45.819946  577249 default_sa.go:45] found service account: "default"
	I1212 21:39:45.819973  577249 default_sa.go:55] duration metric: took 2.677294ms for default service account to be created ...
	I1212 21:39:45.819992  577249 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:39:45.917611  577249 system_pods.go:86] 7 kube-system pods found
	I1212 21:39:45.917648  577249 system_pods.go:89] "coredns-66bc5c9577-ckvjv" [97f3d46a-98b9-449a-b0fa-f44cf663939d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:39:45.917658  577249 system_pods.go:89] "etcd-pause-634913" [b83ea0c9-c3a1-4553-a1da-a480eaf6ef7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:39:45.917688  577249 system_pods.go:89] "kindnet-klcm9" [81d3d855-8a49-4ca3-af27-1694f53c05c6] Running
	I1212 21:39:45.917703  577249 system_pods.go:89] "kube-apiserver-pause-634913" [d21b89ed-4429-4123-8a19-b9e65599bdfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:39:45.917711  577249 system_pods.go:89] "kube-controller-manager-pause-634913" [28cfc299-989a-4f68-bdb9-afe7bfbb8989] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:39:45.917716  577249 system_pods.go:89] "kube-proxy-6qbl7" [730f0fa7-551b-4674-ab46-dafb588f985c] Running
	I1212 21:39:45.917727  577249 system_pods.go:89] "kube-scheduler-pause-634913" [daf26951-3915-4572-9596-b005274b696e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:39:45.917734  577249 system_pods.go:126] duration metric: took 97.722046ms to wait for k8s-apps to be running ...
	I1212 21:39:45.917747  577249 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:39:45.917822  577249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:39:45.931662  577249 system_svc.go:56] duration metric: took 13.905246ms WaitForService to wait for kubelet
	I1212 21:39:45.931699  577249 kubeadm.go:587] duration metric: took 5.305047597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:39:45.931734  577249 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:39:45.935039  577249 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:39:45.935078  577249 node_conditions.go:123] node cpu capacity is 2
	I1212 21:39:45.935092  577249 node_conditions.go:105] duration metric: took 3.341621ms to run NodePressure ...
	I1212 21:39:45.935104  577249 start.go:242] waiting for startup goroutines ...
	I1212 21:39:45.935112  577249 start.go:247] waiting for cluster config update ...
	I1212 21:39:45.935121  577249 start.go:256] writing updated cluster config ...
	I1212 21:39:45.935436  577249 ssh_runner.go:195] Run: rm -f paused
	I1212 21:39:45.939242  577249 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:39:45.939868  577249 kapi.go:59] client config for pause-634913: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:39:45.943014  577249 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ckvjv" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 21:39:47.950261  577249 pod_ready.go:104] pod "coredns-66bc5c9577-ckvjv" is not "Ready", error: <nil>
	W1212 21:39:50.448316  577249 pod_ready.go:104] pod "coredns-66bc5c9577-ckvjv" is not "Ready", error: <nil>
	I1212 21:39:52.448884  577249 pod_ready.go:94] pod "coredns-66bc5c9577-ckvjv" is "Ready"
	I1212 21:39:52.448954  577249 pod_ready.go:86] duration metric: took 6.505912418s for pod "coredns-66bc5c9577-ckvjv" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:52.451812  577249 pod_ready.go:83] waiting for pod "etcd-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 21:39:54.458593  577249 pod_ready.go:104] pod "etcd-pause-634913" is not "Ready", error: <nil>
	I1212 21:39:54.957695  577249 pod_ready.go:94] pod "etcd-pause-634913" is "Ready"
	I1212 21:39:54.957725  577249 pod_ready.go:86] duration metric: took 2.505845699s for pod "etcd-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:54.960285  577249 pod_ready.go:83] waiting for pod "kube-apiserver-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:54.965495  577249 pod_ready.go:94] pod "kube-apiserver-pause-634913" is "Ready"
	I1212 21:39:54.965525  577249 pod_ready.go:86] duration metric: took 5.217382ms for pod "kube-apiserver-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:54.968137  577249 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 21:39:56.973907  577249 pod_ready.go:104] pod "kube-controller-manager-pause-634913" is not "Ready", error: <nil>
	I1212 21:39:57.973405  577249 pod_ready.go:94] pod "kube-controller-manager-pause-634913" is "Ready"
	I1212 21:39:57.973438  577249 pod_ready.go:86] duration metric: took 3.005273207s for pod "kube-controller-manager-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:57.975628  577249 pod_ready.go:83] waiting for pod "kube-proxy-6qbl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:57.979798  577249 pod_ready.go:94] pod "kube-proxy-6qbl7" is "Ready"
	I1212 21:39:57.979825  577249 pod_ready.go:86] duration metric: took 4.173449ms for pod "kube-proxy-6qbl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:57.982138  577249 pod_ready.go:83] waiting for pod "kube-scheduler-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:58.246254  577249 pod_ready.go:94] pod "kube-scheduler-pause-634913" is "Ready"
	I1212 21:39:58.246283  577249 pod_ready.go:86] duration metric: took 264.112835ms for pod "kube-scheduler-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:58.246295  577249 pod_ready.go:40] duration metric: took 12.307018959s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:39:58.309078  577249 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 21:39:58.312204  577249 out.go:179] * Done! kubectl is now configured to use "pause-634913" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.219230075Z" level=info msg="Started container" PID=2352 containerID=089141a6f9a07e9172e41fd06fc0e7f4302cf7f86789a3b911d66cfb1745662a description=kube-system/coredns-66bc5c9577-ckvjv/coredns id=7d0d5a1b-3fb7-40f7-9a05-3a22a82d0e74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9839e4e3f702936735c548bd010c62ec93141bf000b15243eb34f1d3d1f37d21
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.225667735Z" level=info msg="Created container 9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9: kube-system/kube-controller-manager-pause-634913/kube-controller-manager" id=947d439d-b317-494a-94ea-557988d04d38 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.228760509Z" level=info msg="Starting container: 9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9" id=a1d055c2-8b1e-42d4-b776-bb6f24d6054f name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.23480797Z" level=info msg="Started container" PID=2357 containerID=9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9 description=kube-system/kube-controller-manager-pause-634913/kube-controller-manager id=a1d055c2-8b1e-42d4-b776-bb6f24d6054f name=/runtime.v1.RuntimeService/StartContainer sandboxID=781e9bf80ff136de372c6236a8fa865b7853ae9cb12e146c1c207c9d15d3e7ad
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.336676059Z" level=info msg="Created container 04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49: kube-system/kube-scheduler-pause-634913/kube-scheduler" id=8f722b30-8157-428e-aeb2-8959a0a9429c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.339501458Z" level=info msg="Starting container: 04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49" id=8c23984d-1ef4-4f65-bc33-9f04a4aece0e name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.341938668Z" level=info msg="Started container" PID=2362 containerID=04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49 description=kube-system/kube-scheduler-pause-634913/kube-scheduler id=8c23984d-1ef4-4f65-bc33-9f04a4aece0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f07a83bea212cec7d37bae1fe0cfdcb12ca31e196d92c6e888c6a69d84697ba8
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.498804432Z" level=info msg="Created container dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39: kube-system/kube-proxy-6qbl7/kube-proxy" id=2f473b73-c880-420f-bfee-a858f2e35bbb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.503390824Z" level=info msg="Starting container: dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39" id=6a2ca16b-6c65-4605-9a23-046cd12627ec name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.51307216Z" level=info msg="Started container" PID=2346 containerID=dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39 description=kube-system/kube-proxy-6qbl7/kube-proxy id=6a2ca16b-6c65-4605-9a23-046cd12627ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=5df2cfa2a164b5fbd9a8fbd8005c71afe227f36356516bd6c4107bcee9235dc9
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.475706845Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.478996084Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.479032244Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.479054874Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.482237101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.482274557Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.482297442Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.4853634Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.485396229Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.485418966Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.488228807Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.488261005Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.488282674Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.491461644Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.491494202Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	04177a1e4771a       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   21 seconds ago       Running             kube-scheduler            1                   f07a83bea212c       kube-scheduler-pause-634913            kube-system
	9a597043c3e90       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   21 seconds ago       Running             kube-controller-manager   1                   781e9bf80ff13       kube-controller-manager-pause-634913   kube-system
	089141a6f9a07       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   21 seconds ago       Running             coredns                   1                   9839e4e3f7029       coredns-66bc5c9577-ckvjv               kube-system
	dfe8faafc64ee       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   21 seconds ago       Running             kube-proxy                1                   5df2cfa2a164b       kube-proxy-6qbl7                       kube-system
	a1104339af170       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   21 seconds ago       Running             kindnet-cni               1                   4065e64f9fdaa       kindnet-klcm9                          kube-system
	15f89e22e178c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   21 seconds ago       Running             etcd                      1                   74273bab88c97       etcd-pause-634913                      kube-system
	9058d5f4ba613       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   21 seconds ago       Running             kube-apiserver            1                   cfcb358fb8efb       kube-apiserver-pause-634913            kube-system
	af50e0432616e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   33 seconds ago       Exited              coredns                   0                   9839e4e3f7029       coredns-66bc5c9577-ckvjv               kube-system
	65c4628f15cc9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   4065e64f9fdaa       kindnet-klcm9                          kube-system
	a4e451f1e032b       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   5df2cfa2a164b       kube-proxy-6qbl7                       kube-system
	d0fab9b020f0d       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   781e9bf80ff13       kube-controller-manager-pause-634913   kube-system
	28b7c58c880a7       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   cfcb358fb8efb       kube-apiserver-pause-634913            kube-system
	76cd552679907       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   f07a83bea212c       kube-scheduler-pause-634913            kube-system
	524fbadbdc48e       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   74273bab88c97       etcd-pause-634913                      kube-system
	
	
	==> coredns [089141a6f9a07e9172e41fd06fc0e7f4302cf7f86789a3b911d66cfb1745662a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54927 - 35359 "HINFO IN 4574428601002878109.8787081905462397161. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022515276s
	
	
	==> coredns [af50e0432616e5578e5a23191899d072d7fe0365c765d16d22ac3d347d2d9094] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48947 - 46692 "HINFO IN 5867580270348932935.4404528639309647819. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023784089s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-634913
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-634913
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=pause-634913
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T21_38_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 21:38:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-634913
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:39:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:39:28 +0000   Fri, 12 Dec 2025 21:38:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:39:28 +0000   Fri, 12 Dec 2025 21:38:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:39:28 +0000   Fri, 12 Dec 2025 21:38:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:39:28 +0000   Fri, 12 Dec 2025 21:39:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-634913
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                df27309a-63dc-4ade-947a-7ed260135648
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ckvjv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     76s
	  kube-system                 etcd-pause-634913                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         81s
	  kube-system                 kindnet-klcm9                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-pause-634913             250m (12%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-pause-634913    200m (10%)    0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-proxy-6qbl7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-pause-634913             100m (5%)     0 (0%)      0 (0%)           0 (0%)         81s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 74s                kube-proxy       
	  Normal   Starting                 16s                kube-proxy       
	  Normal   NodeHasSufficientPID     88s (x8 over 88s)  kubelet          Node pause-634913 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 88s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node pause-634913 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node pause-634913 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 88s                kubelet          Starting kubelet.
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s                kubelet          Node pause-634913 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s                kubelet          Node pause-634913 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s                kubelet          Node pause-634913 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           77s                node-controller  Node pause-634913 event: Registered Node pause-634913 in Controller
	  Normal   NodeReady                33s                kubelet          Node pause-634913 status is now: NodeReady
	  Normal   RegisteredNode           13s                node-controller  Node pause-634913 event: Registered Node pause-634913 in Controller
	
	
	==> dmesg <==
	[Dec12 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.790478] overlayfs: idmapped layers are currently not supported
	[Dec12 21:05] overlayfs: idmapped layers are currently not supported
	[  +3.613273] overlayfs: idmapped layers are currently not supported
	[Dec12 21:06] overlayfs: idmapped layers are currently not supported
	[Dec12 21:07] overlayfs: idmapped layers are currently not supported
	[ +26.617506] overlayfs: idmapped layers are currently not supported
	[Dec12 21:09] overlayfs: idmapped layers are currently not supported
	[Dec12 21:13] overlayfs: idmapped layers are currently not supported
	[Dec12 21:14] overlayfs: idmapped layers are currently not supported
	[Dec12 21:15] overlayfs: idmapped layers are currently not supported
	[Dec12 21:16] overlayfs: idmapped layers are currently not supported
	[Dec12 21:17] overlayfs: idmapped layers are currently not supported
	[Dec12 21:19] overlayfs: idmapped layers are currently not supported
	[ +26.409125] overlayfs: idmapped layers are currently not supported
	[Dec12 21:20] overlayfs: idmapped layers are currently not supported
	[ +45.357391] overlayfs: idmapped layers are currently not supported
	[Dec12 21:21] overlayfs: idmapped layers are currently not supported
	[ +55.414462] overlayfs: idmapped layers are currently not supported
	[Dec12 21:22] overlayfs: idmapped layers are currently not supported
	[Dec12 21:23] overlayfs: idmapped layers are currently not supported
	[Dec12 21:24] overlayfs: idmapped layers are currently not supported
	[Dec12 21:26] overlayfs: idmapped layers are currently not supported
	[Dec12 21:27] overlayfs: idmapped layers are currently not supported
	[Dec12 21:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [15f89e22e178c3af539227dee8fefc1f4883f34271b0332253a4f08509f0c879] <==
	{"level":"warn","ts":"2025-12-12T21:39:43.029713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.057874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.077581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.110483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.119913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.143627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.164052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.179037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.194574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.254005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.251370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.305767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.311857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.329237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.348127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.369499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.384661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.402050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.423537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.437738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.463764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.494198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.509293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.531969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.677770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42136","server-name":"","error":"EOF"}
	
	
	==> etcd [524fbadbdc48e9b2e13c860539bd61a4c7b22c37117dc771d7b59db12046b288] <==
	{"level":"warn","ts":"2025-12-12T21:38:36.572114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.594247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.631762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.690539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.708752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.724080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.873995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44992","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T21:39:32.209268Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-12T21:39:32.209334Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-634913","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-12T21:39:32.209432Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T21:39:32.352801Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T21:39:32.352893Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T21:39:32.352949Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-12T21:39:32.353059Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-12T21:39:32.353078Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T21:39:32.353138Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T21:39:32.353148Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T21:39:32.353121Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-12T21:39:32.353205Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T21:39:32.353219Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T21:39:32.353226Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T21:39:32.354915Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-12T21:39:32.355008Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T21:39:32.355090Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-12T21:39:32.355124Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-634913","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 21:40:01 up  4:22,  0 user,  load average: 2.49, 1.62, 1.56
	Linux pause-634913 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [65c4628f15cc95be90c08bf36a69f6cb1b76eee884867060f1e71c3247881865] <==
	I1212 21:38:47.423927       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 21:38:47.512690       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1212 21:38:47.512836       1 main.go:148] setting mtu 1500 for CNI 
	I1212 21:38:47.512854       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 21:38:47.512865       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T21:38:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 21:38:47.622817       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 21:38:47.712501       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 21:38:47.712601       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 21:38:47.713519       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1212 21:39:17.623006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1212 21:39:17.713705       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1212 21:39:17.713705       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1212 21:39:17.713917       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1212 21:39:19.013394       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 21:39:19.013428       1 metrics.go:72] Registering metrics
	I1212 21:39:19.013489       1 controller.go:711] "Syncing nftables rules"
	I1212 21:39:27.628597       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 21:39:27.628658       1 main.go:301] handling current node
	
	
	==> kindnet [a1104339af1705164883992605c6d239c40cc3820c8e583be873847c54a5fdaf] <==
	I1212 21:39:40.308602       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 21:39:40.308840       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1212 21:39:40.308967       1 main.go:148] setting mtu 1500 for CNI 
	I1212 21:39:40.308978       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 21:39:40.308992       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T21:39:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 21:39:40.486837       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 21:39:40.486869       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 21:39:40.486879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 21:39:40.487217       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 21:39:44.888541       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 21:39:44.888767       1 metrics.go:72] Registering metrics
	I1212 21:39:44.888862       1 controller.go:711] "Syncing nftables rules"
	I1212 21:39:50.475362       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 21:39:50.475417       1 main.go:301] handling current node
	I1212 21:40:00.484730       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 21:40:00.484820       1 main.go:301] handling current node
	
	
	==> kube-apiserver [28b7c58c880a7ef000a9e96201895fc8fa32c3ff7f5ec9ac86b3ccc8f337cb63] <==
	W1212 21:39:32.232935       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.232983       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1212 21:39:32.233105       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W1212 21:39:32.234807       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.234886       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.234886       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.234936       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.234954       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.234986       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235007       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235042       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235073       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235092       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235135       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235154       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235193       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235204       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235245       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235261       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235296       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235322       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235371       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235410       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235424       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9058d5f4ba61315fb9ccc3747fdcbdafa77a1fc71db2d8f2d8363f05cef61c0a] <==
	I1212 21:39:44.826268       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 21:39:44.832183       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 21:39:44.832275       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 21:39:44.833461       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 21:39:44.833605       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 21:39:44.833706       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 21:39:44.833786       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 21:39:44.835212       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 21:39:44.835294       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 21:39:44.835358       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 21:39:44.843324       1 aggregator.go:171] initial CRD sync complete...
	I1212 21:39:44.843415       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 21:39:44.843445       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 21:39:44.844122       1 cache.go:39] Caches are synced for autoregister controller
	I1212 21:39:44.894980       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 21:39:44.912506       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 21:39:44.912606       1 policy_source.go:240] refreshing policies
	E1212 21:39:44.912923       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 21:39:44.917100       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 21:39:45.434590       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 21:39:46.703761       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 21:39:48.103825       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 21:39:48.202807       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 21:39:48.397303       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 21:39:48.501209       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9] <==
	I1212 21:39:48.139038       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 21:39:48.139139       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 21:39:48.139725       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 21:39:48.139821       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 21:39:48.141461       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 21:39:48.141519       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1212 21:39:48.141502       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 21:39:48.141692       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 21:39:48.152114       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 21:39:48.154479       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 21:39:48.173125       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 21:39:48.180416       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 21:39:48.188921       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 21:39:48.197788       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 21:39:48.197986       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 21:39:48.198513       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-634913"
	I1212 21:39:48.198612       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 21:39:48.199117       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 21:39:48.199385       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 21:39:48.199394       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 21:39:48.204083       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 21:39:48.204312       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 21:39:48.204494       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 21:39:48.204530       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 21:39:48.204538       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	
	
	==> kube-controller-manager [d0fab9b020f0dd464170e1bae896878b70b11d803849b796b85ccc8919abdbb4] <==
	I1212 21:38:44.741906       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 21:38:44.741982       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 21:38:44.743159       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 21:38:44.745528       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 21:38:44.746647       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 21:38:44.772424       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 21:38:44.785601       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 21:38:44.785693       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 21:38:44.785703       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1212 21:38:44.785787       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-634913"
	I1212 21:38:44.785614       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 21:38:44.785869       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 21:38:44.785950       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 21:38:44.785978       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 21:38:44.786714       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1212 21:38:44.786775       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 21:38:44.786957       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 21:38:44.795287       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 21:38:44.795362       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 21:38:44.795387       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 21:38:44.795401       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 21:38:44.795407       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 21:38:44.801226       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 21:38:44.821758       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-634913" podCIDRs=["10.244.0.0/24"]
	I1212 21:39:30.114328       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a4e451f1e032b800e0cc40a399f4d41663317a67b1b0f6595a114ab35837c89a] <==
	I1212 21:38:47.368501       1 server_linux.go:53] "Using iptables proxy"
	I1212 21:38:47.451013       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 21:38:47.551895       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 21:38:47.552008       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1212 21:38:47.552119       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 21:38:47.575660       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 21:38:47.575777       1 server_linux.go:132] "Using iptables Proxier"
	I1212 21:38:47.579225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 21:38:47.579611       1 server.go:527] "Version info" version="v1.34.2"
	I1212 21:38:47.579832       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:38:47.581563       1 config.go:200] "Starting service config controller"
	I1212 21:38:47.581640       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 21:38:47.581697       1 config.go:106] "Starting endpoint slice config controller"
	I1212 21:38:47.581725       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 21:38:47.581759       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 21:38:47.581786       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 21:38:47.583882       1 config.go:309] "Starting node config controller"
	I1212 21:38:47.584687       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 21:38:47.584751       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 21:38:47.682386       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 21:38:47.682390       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 21:38:47.682429       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39] <==
	I1212 21:39:43.714483       1 server_linux.go:53] "Using iptables proxy"
	I1212 21:39:44.308201       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 21:39:44.932683       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 21:39:44.936433       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1212 21:39:44.941801       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 21:39:45.177406       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 21:39:45.177477       1 server_linux.go:132] "Using iptables Proxier"
	I1212 21:39:45.182682       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 21:39:45.183028       1 server.go:527] "Version info" version="v1.34.2"
	I1212 21:39:45.183056       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:39:45.184457       1 config.go:200] "Starting service config controller"
	I1212 21:39:45.184484       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 21:39:45.189808       1 config.go:106] "Starting endpoint slice config controller"
	I1212 21:39:45.189935       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 21:39:45.190007       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 21:39:45.190048       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 21:39:45.194857       1 config.go:309] "Starting node config controller"
	I1212 21:39:45.200104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 21:39:45.200218       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 21:39:45.284731       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 21:39:45.291491       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 21:39:45.291684       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49] <==
	I1212 21:39:43.188995       1 serving.go:386] Generated self-signed cert in-memory
	I1212 21:39:45.582506       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 21:39:45.582638       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:39:45.594180       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1212 21:39:45.594298       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1212 21:39:45.594374       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:39:45.594439       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:39:45.594533       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 21:39:45.594566       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 21:39:45.595737       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 21:39:45.595819       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 21:39:45.695345       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 21:39:45.695494       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1212 21:39:45.695606       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [76cd55267990721a658b7986b785e1c9f8486c092af63ee52fa6bab967adf8fe] <==
	E1212 21:38:37.777709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 21:38:37.777777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:38:38.618014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:38:38.620282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 21:38:38.662485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 21:38:38.720034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:38:38.789078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 21:38:38.799802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 21:38:38.820429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 21:38:38.835241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 21:38:38.867657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 21:38:38.907657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:38:38.937690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 21:38:39.032160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 21:38:39.050900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 21:38:39.113694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 21:38:39.177366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:38:39.361975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1212 21:38:41.457581       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:39:32.208799       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1212 21:39:32.208820       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1212 21:39:32.208841       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1212 21:39:32.208868       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:39:32.209022       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1212 21:39:32.209039       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.020584    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qbl7\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="730f0fa7-551b-4674-ab46-dafb588f985c" pod="kube-system/kube-proxy-6qbl7"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.020875    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-ckvjv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="97f3d46a-98b9-449a-b0fa-f44cf663939d" pod="kube-system/coredns-66bc5c9577-ckvjv"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: I1212 21:39:40.052213    1323 scope.go:117] "RemoveContainer" containerID="d0fab9b020f0dd464170e1bae896878b70b11d803849b796b85ccc8919abdbb4"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.052959    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qbl7\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="730f0fa7-551b-4674-ab46-dafb588f985c" pod="kube-system/kube-proxy-6qbl7"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.053185    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-ckvjv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="97f3d46a-98b9-449a-b0fa-f44cf663939d" pod="kube-system/coredns-66bc5c9577-ckvjv"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.053451    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-634913\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5762c8cd1a27671205a64fe2b09ac7f5" pod="kube-system/kube-scheduler-pause-634913"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.053751    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-634913\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="550018b5380a5824cf7104c0b1f6f137" pod="kube-system/etcd-pause-634913"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.053991    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-634913\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d8da9ad8465ec53bb8ac738d2d9e8ac9" pod="kube-system/kube-apiserver-pause-634913"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.054316    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-634913\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f4d058c83c6449560b36cdcc47554d73" pod="kube-system/kube-controller-manager-pause-634913"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.054651    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-klcm9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="81d3d855-8a49-4ca3-af27-1694f53c05c6" pod="kube-system/kindnet-klcm9"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: I1212 21:39:40.087876    1323 scope.go:117] "RemoveContainer" containerID="76cd55267990721a658b7986b785e1c9f8486c092af63ee52fa6bab967adf8fe"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.621554    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-634913\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="f4d058c83c6449560b36cdcc47554d73" pod="kube-system/kube-controller-manager-pause-634913"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.622186    1323 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-634913\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.622266    1323 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-634913\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.677852    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-klcm9\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="81d3d855-8a49-4ca3-af27-1694f53c05c6" pod="kube-system/kindnet-klcm9"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.778122    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-6qbl7\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="730f0fa7-551b-4674-ab46-dafb588f985c" pod="kube-system/kube-proxy-6qbl7"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.796622    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-ckvjv\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="97f3d46a-98b9-449a-b0fa-f44cf663939d" pod="kube-system/coredns-66bc5c9577-ckvjv"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.802214    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-634913\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="5762c8cd1a27671205a64fe2b09ac7f5" pod="kube-system/kube-scheduler-pause-634913"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.810342    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-634913\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="550018b5380a5824cf7104c0b1f6f137" pod="kube-system/etcd-pause-634913"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.811779    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-634913\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="d8da9ad8465ec53bb8ac738d2d9e8ac9" pod="kube-system/kube-apiserver-pause-634913"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.829235    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-634913\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="d8da9ad8465ec53bb8ac738d2d9e8ac9" pod="kube-system/kube-apiserver-pause-634913"
	Dec 12 21:39:50 pause-634913 kubelet[1323]: W1212 21:39:50.958539    1323 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 12 21:39:58 pause-634913 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 21:39:58 pause-634913 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 21:39:58 pause-634913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
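Note: the repeated "no relationship found between node 'pause-634913' and this object" errors in the kubelet log above come from the API server's node authorizer, which denies kubelet reads until its relationship graph between the Node object and the pods/configmaps it serves is rebuilt after the control-plane restart, so they are usually transient. Assuming the kubectl context name used elsewhere in this report, a quick sanity check could be:

	kubectl --context pause-634913 get nodes -o wide
	kubectl --context pause-634913 -n kube-system get pods -o wide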
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-634913 -n pause-634913
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-634913 -n pause-634913: exit status 2 (371.526964ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-634913 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-634913
helpers_test.go:244: (dbg) docker inspect pause-634913:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe",
	        "Created": "2025-12-12T21:38:14.803480018Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 574676,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-12T21:38:14.871244197Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:0901a42c98a66e87d403260397e61f749cbb49f1d901064d699c20aa39a45595",
	        "ResolvConfPath": "/var/lib/docker/containers/8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe/hosts",
	        "LogPath": "/var/lib/docker/containers/8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe/8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe-json.log",
	        "Name": "/pause-634913",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-634913:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-634913",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8ba8b226fc6194b382841e2e627f3d16aa7e494caa1c8d0a59cd2bcff35b13fe",
	                "LowerDir": "/var/lib/docker/overlay2/29c8287a09b57266b3b1afc804017b598197356acc695a455aa81f94f514a2f9-init/diff:/var/lib/docker/overlay2/0d8202b396b94eb39952b94bf6f599ae5dbc7163167ee15ac72e53b237444d6f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/29c8287a09b57266b3b1afc804017b598197356acc695a455aa81f94f514a2f9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/29c8287a09b57266b3b1afc804017b598197356acc695a455aa81f94f514a2f9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/29c8287a09b57266b3b1afc804017b598197356acc695a455aa81f94f514a2f9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-634913",
	                "Source": "/var/lib/docker/volumes/pause-634913/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-634913",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-634913",
	                "name.minikube.sigs.k8s.io": "pause-634913",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d75ae54978d15c8f39fccc5fd8b23ddab16ed57cc8669bc0d0362a5d8dabc5e2",
	            "SandboxKey": "/var/run/docker/netns/d75ae54978d1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33405"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33406"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33409"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33407"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33408"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-634913": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:87:cb:a4:bd:d2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "14883b96389dfc4343f0d238b8addc8114fd0d727d68ca1ef9c8cbda3610474e",
	                    "EndpointID": "16080da94973d2f7f11393e9cf527a167ecfec5081ed61cd4f568800de0d7ee3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-634913",
	                        "8ba8b226fc61"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
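Note: the "Ports" map in the docker inspect output above shows how the host reaches this profile; the Kubernetes API server port 8443/tcp is published on 127.0.0.1:33408. The same value can be read directly with a docker inspect Go template of the kind minikube itself runs later in these logs, for example:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' pause-634913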
helpers_test.go:248: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p pause-634913 -n pause-634913
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p pause-634913 -n pause-634913: exit status 2 (368.770309ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-arm64 -p pause-634913 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-arm64 -p pause-634913 logs -n 25: (1.434055992s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                      ARGS                                                                       │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-406866 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                           │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:25 UTC │ 12 Dec 25 21:26 UTC │
	│ start   │ -p missing-upgrade-992322 --memory=3072 --driver=docker  --container-runtime=crio                                                               │ missing-upgrade-992322    │ jenkins │ v1.35.0 │ 12 Dec 25 21:25 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p NoKubernetes-406866 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:26 UTC │
	│ delete  │ -p NoKubernetes-406866                                                                                                                          │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:26 UTC │
	│ start   │ -p NoKubernetes-406866 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                           │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:26 UTC │
	│ ssh     │ -p NoKubernetes-406866 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │                     │
	│ stop    │ -p NoKubernetes-406866                                                                                                                          │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:26 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p NoKubernetes-406866 --driver=docker  --container-runtime=crio                                                                                │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p missing-upgrade-992322 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ missing-upgrade-992322    │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ ssh     │ -p NoKubernetes-406866 sudo systemctl is-active --quiet service kubelet                                                                         │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │                     │
	│ delete  │ -p NoKubernetes-406866                                                                                                                          │ NoKubernetes-406866       │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio        │ kubernetes-upgrade-905307 │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ delete  │ -p missing-upgrade-992322                                                                                                                       │ missing-upgrade-992322    │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ stop    │ -p kubernetes-upgrade-905307                                                                                                                    │ kubernetes-upgrade-905307 │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:27 UTC │
	│ start   │ -p stopped-upgrade-302169 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ stopped-upgrade-302169    │ jenkins │ v1.35.0 │ 12 Dec 25 21:27 UTC │ 12 Dec 25 21:28 UTC │
	│ start   │ -p kubernetes-upgrade-905307 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-905307 │ jenkins │ v1.37.0 │ 12 Dec 25 21:27 UTC │                     │
	│ stop    │ stopped-upgrade-302169 stop                                                                                                                     │ stopped-upgrade-302169    │ jenkins │ v1.35.0 │ 12 Dec 25 21:28 UTC │ 12 Dec 25 21:28 UTC │
	│ start   │ -p stopped-upgrade-302169 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ stopped-upgrade-302169    │ jenkins │ v1.37.0 │ 12 Dec 25 21:28 UTC │ 12 Dec 25 21:32 UTC │
	│ delete  │ -p stopped-upgrade-302169                                                                                                                       │ stopped-upgrade-302169    │ jenkins │ v1.37.0 │ 12 Dec 25 21:33 UTC │ 12 Dec 25 21:33 UTC │
	│ start   │ -p running-upgrade-649209 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                            │ running-upgrade-649209    │ jenkins │ v1.35.0 │ 12 Dec 25 21:33 UTC │ 12 Dec 25 21:33 UTC │
	│ start   │ -p running-upgrade-649209 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                        │ running-upgrade-649209    │ jenkins │ v1.37.0 │ 12 Dec 25 21:33 UTC │ 12 Dec 25 21:38 UTC │
	│ delete  │ -p running-upgrade-649209                                                                                                                       │ running-upgrade-649209    │ jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:38 UTC │
	│ start   │ -p pause-634913 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                       │ pause-634913              │ jenkins │ v1.37.0 │ 12 Dec 25 21:38 UTC │ 12 Dec 25 21:39 UTC │
	│ start   │ -p pause-634913 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                │ pause-634913              │ jenkins │ v1.37.0 │ 12 Dec 25 21:39 UTC │ 12 Dec 25 21:39 UTC │
	│ pause   │ -p pause-634913 --alsologtostderr -v=5                                                                                                          │ pause-634913              │ jenkins │ v1.37.0 │ 12 Dec 25 21:39 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 21:39:30
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 21:39:30.809484  577249 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:39:30.809600  577249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:39:30.809612  577249 out.go:374] Setting ErrFile to fd 2...
	I1212 21:39:30.809617  577249 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:39:30.809874  577249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:39:30.810268  577249 out.go:368] Setting JSON to false
	I1212 21:39:30.811282  577249 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":15723,"bootTime":1765559848,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 21:39:30.811355  577249 start.go:143] virtualization:  
	I1212 21:39:30.813376  577249 out.go:179] * [pause-634913] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 21:39:30.814911  577249 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 21:39:30.814970  577249 notify.go:221] Checking for updates...
	I1212 21:39:30.818583  577249 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 21:39:30.820824  577249 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:39:30.822101  577249 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 21:39:30.823274  577249 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 21:39:30.824501  577249 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 21:39:30.826205  577249 config.go:182] Loaded profile config "pause-634913": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:39:30.826765  577249 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 21:39:30.867058  577249 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 21:39:30.867184  577249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:39:30.929343  577249 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 21:39:30.919955353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:39:30.929461  577249 docker.go:319] overlay module found
	I1212 21:39:30.930849  577249 out.go:179] * Using the docker driver based on existing profile
	I1212 21:39:30.932043  577249 start.go:309] selected driver: docker
	I1212 21:39:30.932065  577249 start.go:927] validating driver "docker" against &{Name:pause-634913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-634913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:39:30.932212  577249 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 21:39:30.932357  577249 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:39:30.989228  577249 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 21:39:30.979985251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:39:30.989673  577249 cni.go:84] Creating CNI manager for ""
	I1212 21:39:30.989745  577249 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 21:39:30.989800  577249 start.go:353] cluster config:
	{Name:pause-634913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-634913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:39:30.991252  577249 out.go:179] * Starting "pause-634913" primary control-plane node in "pause-634913" cluster
	I1212 21:39:30.992362  577249 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 21:39:30.993789  577249 out.go:179] * Pulling base image v0.0.48-1765505794-22112 ...
	I1212 21:39:30.994953  577249 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:39:30.995019  577249 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 21:39:30.995032  577249 cache.go:65] Caching tarball of preloaded images
	I1212 21:39:30.995028  577249 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 21:39:30.995117  577249 preload.go:238] Found /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1212 21:39:30.995128  577249 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on crio
	I1212 21:39:30.995262  577249 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/config.json ...
	I1212 21:39:31.017959  577249 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon, skipping pull
	I1212 21:39:31.017987  577249 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in daemon, skipping load
	I1212 21:39:31.018004  577249 cache.go:243] Successfully downloaded all kic artifacts
	I1212 21:39:31.018038  577249 start.go:360] acquireMachinesLock for pause-634913: {Name:mk73b6f645c53f163db55925e2dc12b1ddc178e2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1212 21:39:31.018110  577249 start.go:364] duration metric: took 49.255µs to acquireMachinesLock for "pause-634913"
	I1212 21:39:31.018136  577249 start.go:96] Skipping create...Using existing machine configuration
	I1212 21:39:31.018146  577249 fix.go:54] fixHost starting: 
	I1212 21:39:31.018435  577249 cli_runner.go:164] Run: docker container inspect pause-634913 --format={{.State.Status}}
	I1212 21:39:31.036463  577249 fix.go:112] recreateIfNeeded on pause-634913: state=Running err=<nil>
	W1212 21:39:31.036493  577249 fix.go:138] unexpected machine state, will restart: <nil>
	I1212 21:39:31.037981  577249 out.go:252] * Updating the running docker "pause-634913" container ...
	I1212 21:39:31.038008  577249 machine.go:94] provisionDockerMachine start ...
	I1212 21:39:31.038088  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:31.056656  577249 main.go:143] libmachine: Using SSH client type: native
	I1212 21:39:31.056991  577249 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1212 21:39:31.057007  577249 main.go:143] libmachine: About to run SSH command:
	hostname
	I1212 21:39:31.213459  577249 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-634913
	
	I1212 21:39:31.213495  577249 ubuntu.go:182] provisioning hostname "pause-634913"
	I1212 21:39:31.213562  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:31.234125  577249 main.go:143] libmachine: Using SSH client type: native
	I1212 21:39:31.234444  577249 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1212 21:39:31.234456  577249 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-634913 && echo "pause-634913" | sudo tee /etc/hostname
	I1212 21:39:31.403462  577249 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-634913
	
	I1212 21:39:31.403600  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:31.421380  577249 main.go:143] libmachine: Using SSH client type: native
	I1212 21:39:31.421732  577249 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1212 21:39:31.421808  577249 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-634913' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-634913/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-634913' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1212 21:39:31.577338  577249 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1212 21:39:31.577420  577249 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22112-362983/.minikube CaCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22112-362983/.minikube}
	I1212 21:39:31.577454  577249 ubuntu.go:190] setting up certificates
	I1212 21:39:31.577499  577249 provision.go:84] configureAuth start
	I1212 21:39:31.577592  577249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-634913
	I1212 21:39:31.597918  577249 provision.go:143] copyHostCerts
	I1212 21:39:31.598010  577249 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem, removing ...
	I1212 21:39:31.598026  577249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem
	I1212 21:39:31.598108  577249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/ca.pem (1082 bytes)
	I1212 21:39:31.598216  577249 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem, removing ...
	I1212 21:39:31.598228  577249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem
	I1212 21:39:31.598255  577249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/cert.pem (1123 bytes)
	I1212 21:39:31.598318  577249 exec_runner.go:144] found /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem, removing ...
	I1212 21:39:31.598326  577249 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem
	I1212 21:39:31.598352  577249 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22112-362983/.minikube/key.pem (1679 bytes)
	I1212 21:39:31.598411  577249 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem org=jenkins.pause-634913 san=[127.0.0.1 192.168.85.2 localhost minikube pause-634913]
	I1212 21:39:31.818076  577249 provision.go:177] copyRemoteCerts
	I1212 21:39:31.818148  577249 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1212 21:39:31.818196  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:31.835957  577249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:31.944677  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1212 21:39:31.963823  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1212 21:39:31.982352  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1212 21:39:32.003325  577249 provision.go:87] duration metric: took 425.788858ms to configureAuth
	I1212 21:39:32.003355  577249 ubuntu.go:206] setting minikube options for container-runtime
	I1212 21:39:32.003628  577249 config.go:182] Loaded profile config "pause-634913": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:39:32.003757  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:32.026138  577249 main.go:143] libmachine: Using SSH client type: native
	I1212 21:39:32.026523  577249 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3db9e0] 0x3ddee0 <nil>  [] 0s} 127.0.0.1 33405 <nil> <nil>}
	I1212 21:39:32.026542  577249 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1212 21:39:37.399765  577249 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1212 21:39:37.399791  577249 machine.go:97] duration metric: took 6.361774093s to provisionDockerMachine
	I1212 21:39:37.399804  577249 start.go:293] postStartSetup for "pause-634913" (driver="docker")
	I1212 21:39:37.399814  577249 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1212 21:39:37.399890  577249 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1212 21:39:37.399940  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:37.416513  577249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:37.524735  577249 ssh_runner.go:195] Run: cat /etc/os-release
	I1212 21:39:37.528256  577249 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1212 21:39:37.528291  577249 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1212 21:39:37.528303  577249 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/addons for local assets ...
	I1212 21:39:37.528360  577249 filesync.go:126] Scanning /home/jenkins/minikube-integration/22112-362983/.minikube/files for local assets ...
	I1212 21:39:37.528481  577249 filesync.go:149] local asset: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem -> 3648532.pem in /etc/ssl/certs
	I1212 21:39:37.528600  577249 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1212 21:39:37.536306  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:39:37.554802  577249 start.go:296] duration metric: took 154.981537ms for postStartSetup
	I1212 21:39:37.554935  577249 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:39:37.554995  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:37.572691  577249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:37.677773  577249 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1212 21:39:37.683008  577249 fix.go:56] duration metric: took 6.664854128s for fixHost
	I1212 21:39:37.683036  577249 start.go:83] releasing machines lock for "pause-634913", held for 6.664911638s
	I1212 21:39:37.683105  577249 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-634913
	I1212 21:39:37.700075  577249 ssh_runner.go:195] Run: cat /version.json
	I1212 21:39:37.700134  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:37.700151  577249 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1212 21:39:37.700227  577249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-634913
	I1212 21:39:37.718872  577249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:37.722554  577249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33405 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/pause-634913/id_rsa Username:docker}
	I1212 21:39:37.913578  577249 ssh_runner.go:195] Run: systemctl --version
	I1212 21:39:37.920181  577249 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1212 21:39:37.966987  577249 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1212 21:39:37.973947  577249 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1212 21:39:37.974067  577249 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1212 21:39:37.984236  577249 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1212 21:39:37.984312  577249 start.go:496] detecting cgroup driver to use...
	I1212 21:39:37.984361  577249 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1212 21:39:37.984491  577249 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1212 21:39:38.001920  577249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1212 21:39:38.020764  577249 docker.go:218] disabling cri-docker service (if available) ...
	I1212 21:39:38.020895  577249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1212 21:39:38.039677  577249 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1212 21:39:38.054885  577249 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1212 21:39:38.194605  577249 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1212 21:39:38.325882  577249 docker.go:234] disabling docker service ...
	I1212 21:39:38.325945  577249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1212 21:39:38.341104  577249 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1212 21:39:38.354783  577249 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1212 21:39:38.482076  577249 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1212 21:39:38.620179  577249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1212 21:39:38.633328  577249 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1212 21:39:38.648551  577249 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1212 21:39:38.648628  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.657646  577249 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1212 21:39:38.657797  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.666883  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.676668  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.685748  577249 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1212 21:39:38.694679  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.706301  577249 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.715500  577249 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1212 21:39:38.724762  577249 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1212 21:39:38.733383  577249 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1212 21:39:38.742022  577249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:39:38.886277  577249 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1212 21:39:39.090096  577249 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1212 21:39:39.090218  577249 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1212 21:39:39.094552  577249 start.go:564] Will wait 60s for crictl version
	I1212 21:39:39.094662  577249 ssh_runner.go:195] Run: which crictl
	I1212 21:39:39.098546  577249 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1212 21:39:39.124659  577249 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1212 21:39:39.124819  577249 ssh_runner.go:195] Run: crio --version
	I1212 21:39:39.153670  577249 ssh_runner.go:195] Run: crio --version
	I1212 21:39:39.184591  577249 out.go:179] * Preparing Kubernetes v1.34.2 on CRI-O 1.34.3 ...
	I1212 21:39:39.185907  577249 cli_runner.go:164] Run: docker network inspect pause-634913 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1212 21:39:39.202698  577249 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1212 21:39:39.206986  577249 kubeadm.go:884] updating cluster {Name:pause-634913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-634913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1212 21:39:39.207144  577249 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 21:39:39.207203  577249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:39:39.243951  577249 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:39:39.243977  577249 crio.go:433] Images already preloaded, skipping extraction
	I1212 21:39:39.244031  577249 ssh_runner.go:195] Run: sudo crictl images --output json
	I1212 21:39:39.269474  577249 crio.go:514] all images are preloaded for cri-o runtime.
	I1212 21:39:39.269497  577249 cache_images.go:86] Images are preloaded, skipping loading
	I1212 21:39:39.269504  577249 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.2 crio true true} ...
	I1212 21:39:39.269649  577249 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-634913 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.2 ClusterName:pause-634913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1212 21:39:39.269730  577249 ssh_runner.go:195] Run: crio config
	I1212 21:39:39.322333  577249 cni.go:84] Creating CNI manager for ""
	I1212 21:39:39.322359  577249 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 21:39:39.322379  577249 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1212 21:39:39.322439  577249 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-634913 NodeName:pause-634913 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1212 21:39:39.322582  577249 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-634913"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1212 21:39:39.322664  577249 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.2
	I1212 21:39:39.330438  577249 binaries.go:51] Found k8s binaries, skipping transfer
	I1212 21:39:39.330505  577249 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1212 21:39:39.338001  577249 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1212 21:39:39.351028  577249 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1212 21:39:39.364462  577249 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1212 21:39:39.377177  577249 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1212 21:39:39.380993  577249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:39:39.526425  577249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:39:39.540091  577249 certs.go:69] Setting up /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913 for IP: 192.168.85.2
	I1212 21:39:39.540113  577249 certs.go:195] generating shared ca certs ...
	I1212 21:39:39.540129  577249 certs.go:227] acquiring lock for ca certs: {Name:mke6545c4e304bbe114592c579854965984df8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:39:39.540341  577249 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key
	I1212 21:39:39.540810  577249 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key
	I1212 21:39:39.540853  577249 certs.go:257] generating profile certs ...
	I1212 21:39:39.540979  577249 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/client.key
	I1212 21:39:39.541099  577249 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/apiserver.key.9f95ce7c
	I1212 21:39:39.541172  577249 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/proxy-client.key
	I1212 21:39:39.541305  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem (1338 bytes)
	W1212 21:39:39.541368  577249 certs.go:480] ignoring /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853_empty.pem, impossibly tiny 0 bytes
	I1212 21:39:39.541384  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca-key.pem (1675 bytes)
	I1212 21:39:39.541428  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/ca.pem (1082 bytes)
	I1212 21:39:39.541479  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/cert.pem (1123 bytes)
	I1212 21:39:39.541512  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/certs/key.pem (1679 bytes)
	I1212 21:39:39.541581  577249 certs.go:484] found cert: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem (1708 bytes)
	I1212 21:39:39.542376  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1212 21:39:39.565016  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1212 21:39:39.582649  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1212 21:39:39.601049  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1212 21:39:39.619156  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1212 21:39:39.637086  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1212 21:39:39.654930  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1212 21:39:39.673442  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1212 21:39:39.691064  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/ssl/certs/3648532.pem --> /usr/share/ca-certificates/3648532.pem (1708 bytes)
	I1212 21:39:39.708690  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1212 21:39:39.726449  577249 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22112-362983/.minikube/certs/364853.pem --> /usr/share/ca-certificates/364853.pem (1338 bytes)
	I1212 21:39:39.744154  577249 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1212 21:39:39.757437  577249 ssh_runner.go:195] Run: openssl version
	I1212 21:39:39.763744  577249 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3648532.pem
	I1212 21:39:39.771346  577249 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3648532.pem /etc/ssl/certs/3648532.pem
	I1212 21:39:39.780409  577249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3648532.pem
	I1212 21:39:39.784308  577249 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 12 20:20 /usr/share/ca-certificates/3648532.pem
	I1212 21:39:39.784402  577249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3648532.pem
	I1212 21:39:39.825147  577249 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1212 21:39:39.832845  577249 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:39:39.840416  577249 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1212 21:39:39.847939  577249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:39:39.851682  577249 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 12 20:10 /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:39:39.851767  577249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1212 21:39:39.894187  577249 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1212 21:39:39.902902  577249 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/364853.pem
	I1212 21:39:39.911162  577249 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/364853.pem /etc/ssl/certs/364853.pem
	I1212 21:39:39.920751  577249 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/364853.pem
	I1212 21:39:39.933391  577249 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 12 20:20 /usr/share/ca-certificates/364853.pem
	I1212 21:39:39.933483  577249 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/364853.pem
	I1212 21:39:39.994405  577249 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1212 21:39:40.013547  577249 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1212 21:39:40.031818  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1212 21:39:40.124058  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1212 21:39:40.253293  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1212 21:39:40.354293  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1212 21:39:40.408639  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1212 21:39:40.461073  577249 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1212 21:39:40.533290  577249 kubeadm.go:401] StartCluster: {Name:pause-634913 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:pause-634913 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 21:39:40.533428  577249 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1212 21:39:40.533500  577249 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1212 21:39:40.566630  577249 cri.go:89] found id: "04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49"
	I1212 21:39:40.566670  577249 cri.go:89] found id: "9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9"
	I1212 21:39:40.566675  577249 cri.go:89] found id: "089141a6f9a07e9172e41fd06fc0e7f4302cf7f86789a3b911d66cfb1745662a"
	I1212 21:39:40.566679  577249 cri.go:89] found id: "dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39"
	I1212 21:39:40.566682  577249 cri.go:89] found id: "a1104339af1705164883992605c6d239c40cc3820c8e583be873847c54a5fdaf"
	I1212 21:39:40.566689  577249 cri.go:89] found id: "15f89e22e178c3af539227dee8fefc1f4883f34271b0332253a4f08509f0c879"
	I1212 21:39:40.566695  577249 cri.go:89] found id: "9058d5f4ba61315fb9ccc3747fdcbdafa77a1fc71db2d8f2d8363f05cef61c0a"
	I1212 21:39:40.566698  577249 cri.go:89] found id: "af50e0432616e5578e5a23191899d072d7fe0365c765d16d22ac3d347d2d9094"
	I1212 21:39:40.566701  577249 cri.go:89] found id: "65c4628f15cc95be90c08bf36a69f6cb1b76eee884867060f1e71c3247881865"
	I1212 21:39:40.566709  577249 cri.go:89] found id: "a4e451f1e032b800e0cc40a399f4d41663317a67b1b0f6595a114ab35837c89a"
	I1212 21:39:40.566715  577249 cri.go:89] found id: "d0fab9b020f0dd464170e1bae896878b70b11d803849b796b85ccc8919abdbb4"
	I1212 21:39:40.566718  577249 cri.go:89] found id: "28b7c58c880a7ef000a9e96201895fc8fa32c3ff7f5ec9ac86b3ccc8f337cb63"
	I1212 21:39:40.566736  577249 cri.go:89] found id: "76cd55267990721a658b7986b785e1c9f8486c092af63ee52fa6bab967adf8fe"
	I1212 21:39:40.566741  577249 cri.go:89] found id: "524fbadbdc48e9b2e13c860539bd61a4c7b22c37117dc771d7b59db12046b288"
	I1212 21:39:40.566744  577249 cri.go:89] found id: ""
	I1212 21:39:40.566795  577249 ssh_runner.go:195] Run: sudo runc list -f json
	W1212 21:39:40.584257  577249 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T21:39:40Z" level=error msg="open /run/runc: no such file or directory"
	I1212 21:39:40.584345  577249 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1212 21:39:40.600078  577249 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1212 21:39:40.600180  577249 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1212 21:39:40.600338  577249 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1212 21:39:40.613589  577249 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1212 21:39:40.614312  577249 kubeconfig.go:125] found "pause-634913" server: "https://192.168.85.2:8443"
	I1212 21:39:40.615214  577249 kapi.go:59] client config for pause-634913: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:39:40.615967  577249 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1212 21:39:40.616053  577249 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1212 21:39:40.616071  577249 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1212 21:39:40.616078  577249 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1212 21:39:40.616087  577249 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1212 21:39:40.616508  577249 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1212 21:39:40.625231  577249 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1212 21:39:40.625268  577249 kubeadm.go:602] duration metric: took 25.081439ms to restartPrimaryControlPlane
	I1212 21:39:40.625278  577249 kubeadm.go:403] duration metric: took 91.999151ms to StartCluster
	I1212 21:39:40.625301  577249 settings.go:142] acquiring lock: {Name:mk1bdccb8482fe86d6addb73e1bdc7c41def006f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:39:40.625373  577249 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 21:39:40.626354  577249 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/kubeconfig: {Name:mk0faf1d5081dbb3cb94855e245ed727e59f8124 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 21:39:40.626621  577249 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1212 21:39:40.627018  577249 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1212 21:39:40.627162  577249 config.go:182] Loaded profile config "pause-634913": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:39:40.628626  577249 out.go:179] * Verifying Kubernetes components...
	I1212 21:39:40.628716  577249 out.go:179] * Enabled addons: 
	I1212 21:39:40.630060  577249 addons.go:530] duration metric: took 3.050246ms for enable addons: enabled=[]
	I1212 21:39:40.630105  577249 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1212 21:39:40.905586  577249 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1212 21:39:40.929667  577249 node_ready.go:35] waiting up to 6m0s for node "pause-634913" to be "Ready" ...
	I1212 21:39:44.779470  577249 node_ready.go:49] node "pause-634913" is "Ready"
	I1212 21:39:44.779547  577249 node_ready.go:38] duration metric: took 3.849800878s for node "pause-634913" to be "Ready" ...
	I1212 21:39:44.779576  577249 api_server.go:52] waiting for apiserver process to appear ...
	I1212 21:39:44.779666  577249 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:39:44.801185  577249 api_server.go:72] duration metric: took 4.174528594s to wait for apiserver process to appear ...
	I1212 21:39:44.801223  577249 api_server.go:88] waiting for apiserver healthz status ...
	I1212 21:39:44.801243  577249 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 21:39:44.810977  577249 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1212 21:39:44.811001  577249 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1212 21:39:45.301643  577249 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 21:39:45.310762  577249 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1212 21:39:45.310915  577249 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1212 21:39:45.801440  577249 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1212 21:39:45.811829  577249 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1212 21:39:45.813016  577249 api_server.go:141] control plane version: v1.34.2
	I1212 21:39:45.813057  577249 api_server.go:131] duration metric: took 1.011817524s to wait for apiserver health ...
	I1212 21:39:45.813067  577249 system_pods.go:43] waiting for kube-system pods to appear ...
	I1212 21:39:45.817122  577249 system_pods.go:59] 7 kube-system pods found
	I1212 21:39:45.817164  577249 system_pods.go:61] "coredns-66bc5c9577-ckvjv" [97f3d46a-98b9-449a-b0fa-f44cf663939d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:39:45.817174  577249 system_pods.go:61] "etcd-pause-634913" [b83ea0c9-c3a1-4553-a1da-a480eaf6ef7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:39:45.817180  577249 system_pods.go:61] "kindnet-klcm9" [81d3d855-8a49-4ca3-af27-1694f53c05c6] Running
	I1212 21:39:45.817213  577249 system_pods.go:61] "kube-apiserver-pause-634913" [d21b89ed-4429-4123-8a19-b9e65599bdfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:39:45.817228  577249 system_pods.go:61] "kube-controller-manager-pause-634913" [28cfc299-989a-4f68-bdb9-afe7bfbb8989] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:39:45.817233  577249 system_pods.go:61] "kube-proxy-6qbl7" [730f0fa7-551b-4674-ab46-dafb588f985c] Running
	I1212 21:39:45.817250  577249 system_pods.go:61] "kube-scheduler-pause-634913" [daf26951-3915-4572-9596-b005274b696e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:39:45.817256  577249 system_pods.go:74] duration metric: took 4.18359ms to wait for pod list to return data ...
	I1212 21:39:45.817269  577249 default_sa.go:34] waiting for default service account to be created ...
	I1212 21:39:45.819946  577249 default_sa.go:45] found service account: "default"
	I1212 21:39:45.819973  577249 default_sa.go:55] duration metric: took 2.677294ms for default service account to be created ...
	I1212 21:39:45.819992  577249 system_pods.go:116] waiting for k8s-apps to be running ...
	I1212 21:39:45.917611  577249 system_pods.go:86] 7 kube-system pods found
	I1212 21:39:45.917648  577249 system_pods.go:89] "coredns-66bc5c9577-ckvjv" [97f3d46a-98b9-449a-b0fa-f44cf663939d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1212 21:39:45.917658  577249 system_pods.go:89] "etcd-pause-634913" [b83ea0c9-c3a1-4553-a1da-a480eaf6ef7e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1212 21:39:45.917688  577249 system_pods.go:89] "kindnet-klcm9" [81d3d855-8a49-4ca3-af27-1694f53c05c6] Running
	I1212 21:39:45.917703  577249 system_pods.go:89] "kube-apiserver-pause-634913" [d21b89ed-4429-4123-8a19-b9e65599bdfa] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1212 21:39:45.917711  577249 system_pods.go:89] "kube-controller-manager-pause-634913" [28cfc299-989a-4f68-bdb9-afe7bfbb8989] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1212 21:39:45.917716  577249 system_pods.go:89] "kube-proxy-6qbl7" [730f0fa7-551b-4674-ab46-dafb588f985c] Running
	I1212 21:39:45.917727  577249 system_pods.go:89] "kube-scheduler-pause-634913" [daf26951-3915-4572-9596-b005274b696e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1212 21:39:45.917734  577249 system_pods.go:126] duration metric: took 97.722046ms to wait for k8s-apps to be running ...
	I1212 21:39:45.917747  577249 system_svc.go:44] waiting for kubelet service to be running ....
	I1212 21:39:45.917822  577249 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:39:45.931662  577249 system_svc.go:56] duration metric: took 13.905246ms WaitForService to wait for kubelet
	I1212 21:39:45.931699  577249 kubeadm.go:587] duration metric: took 5.305047597s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1212 21:39:45.931734  577249 node_conditions.go:102] verifying NodePressure condition ...
	I1212 21:39:45.935039  577249 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1212 21:39:45.935078  577249 node_conditions.go:123] node cpu capacity is 2
	I1212 21:39:45.935092  577249 node_conditions.go:105] duration metric: took 3.341621ms to run NodePressure ...
	I1212 21:39:45.935104  577249 start.go:242] waiting for startup goroutines ...
	I1212 21:39:45.935112  577249 start.go:247] waiting for cluster config update ...
	I1212 21:39:45.935121  577249 start.go:256] writing updated cluster config ...
	I1212 21:39:45.935436  577249 ssh_runner.go:195] Run: rm -f paused
	I1212 21:39:45.939242  577249 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:39:45.939868  577249 kapi.go:59] client config for pause-634913: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/client.crt", KeyFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/profiles/pause-634913/client.key", CAFile:"/home/jenkins/minikube-integration/22112-362983/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1fb4ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1212 21:39:45.943014  577249 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ckvjv" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 21:39:47.950261  577249 pod_ready.go:104] pod "coredns-66bc5c9577-ckvjv" is not "Ready", error: <nil>
	W1212 21:39:50.448316  577249 pod_ready.go:104] pod "coredns-66bc5c9577-ckvjv" is not "Ready", error: <nil>
	I1212 21:39:52.448884  577249 pod_ready.go:94] pod "coredns-66bc5c9577-ckvjv" is "Ready"
	I1212 21:39:52.448954  577249 pod_ready.go:86] duration metric: took 6.505912418s for pod "coredns-66bc5c9577-ckvjv" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:52.451812  577249 pod_ready.go:83] waiting for pod "etcd-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 21:39:54.458593  577249 pod_ready.go:104] pod "etcd-pause-634913" is not "Ready", error: <nil>
	I1212 21:39:54.957695  577249 pod_ready.go:94] pod "etcd-pause-634913" is "Ready"
	I1212 21:39:54.957725  577249 pod_ready.go:86] duration metric: took 2.505845699s for pod "etcd-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:54.960285  577249 pod_ready.go:83] waiting for pod "kube-apiserver-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:54.965495  577249 pod_ready.go:94] pod "kube-apiserver-pause-634913" is "Ready"
	I1212 21:39:54.965525  577249 pod_ready.go:86] duration metric: took 5.217382ms for pod "kube-apiserver-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:54.968137  577249 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	W1212 21:39:56.973907  577249 pod_ready.go:104] pod "kube-controller-manager-pause-634913" is not "Ready", error: <nil>
	I1212 21:39:57.973405  577249 pod_ready.go:94] pod "kube-controller-manager-pause-634913" is "Ready"
	I1212 21:39:57.973438  577249 pod_ready.go:86] duration metric: took 3.005273207s for pod "kube-controller-manager-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:57.975628  577249 pod_ready.go:83] waiting for pod "kube-proxy-6qbl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:57.979798  577249 pod_ready.go:94] pod "kube-proxy-6qbl7" is "Ready"
	I1212 21:39:57.979825  577249 pod_ready.go:86] duration metric: took 4.173449ms for pod "kube-proxy-6qbl7" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:57.982138  577249 pod_ready.go:83] waiting for pod "kube-scheduler-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:58.246254  577249 pod_ready.go:94] pod "kube-scheduler-pause-634913" is "Ready"
	I1212 21:39:58.246283  577249 pod_ready.go:86] duration metric: took 264.112835ms for pod "kube-scheduler-pause-634913" in "kube-system" namespace to be "Ready" or be gone ...
	I1212 21:39:58.246295  577249 pod_ready.go:40] duration metric: took 12.307018959s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1212 21:39:58.309078  577249 start.go:625] kubectl: 1.33.2, cluster: 1.34.2 (minor skew: 1)
	I1212 21:39:58.312204  577249 out.go:179] * Done! kubectl is now configured to use "pause-634913" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.219230075Z" level=info msg="Started container" PID=2352 containerID=089141a6f9a07e9172e41fd06fc0e7f4302cf7f86789a3b911d66cfb1745662a description=kube-system/coredns-66bc5c9577-ckvjv/coredns id=7d0d5a1b-3fb7-40f7-9a05-3a22a82d0e74 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9839e4e3f702936735c548bd010c62ec93141bf000b15243eb34f1d3d1f37d21
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.225667735Z" level=info msg="Created container 9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9: kube-system/kube-controller-manager-pause-634913/kube-controller-manager" id=947d439d-b317-494a-94ea-557988d04d38 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.228760509Z" level=info msg="Starting container: 9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9" id=a1d055c2-8b1e-42d4-b776-bb6f24d6054f name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.23480797Z" level=info msg="Started container" PID=2357 containerID=9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9 description=kube-system/kube-controller-manager-pause-634913/kube-controller-manager id=a1d055c2-8b1e-42d4-b776-bb6f24d6054f name=/runtime.v1.RuntimeService/StartContainer sandboxID=781e9bf80ff136de372c6236a8fa865b7853ae9cb12e146c1c207c9d15d3e7ad
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.336676059Z" level=info msg="Created container 04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49: kube-system/kube-scheduler-pause-634913/kube-scheduler" id=8f722b30-8157-428e-aeb2-8959a0a9429c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.339501458Z" level=info msg="Starting container: 04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49" id=8c23984d-1ef4-4f65-bc33-9f04a4aece0e name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.341938668Z" level=info msg="Started container" PID=2362 containerID=04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49 description=kube-system/kube-scheduler-pause-634913/kube-scheduler id=8c23984d-1ef4-4f65-bc33-9f04a4aece0e name=/runtime.v1.RuntimeService/StartContainer sandboxID=f07a83bea212cec7d37bae1fe0cfdcb12ca31e196d92c6e888c6a69d84697ba8
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.498804432Z" level=info msg="Created container dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39: kube-system/kube-proxy-6qbl7/kube-proxy" id=2f473b73-c880-420f-bfee-a858f2e35bbb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.503390824Z" level=info msg="Starting container: dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39" id=6a2ca16b-6c65-4605-9a23-046cd12627ec name=/runtime.v1.RuntimeService/StartContainer
	Dec 12 21:39:40 pause-634913 crio[2082]: time="2025-12-12T21:39:40.51307216Z" level=info msg="Started container" PID=2346 containerID=dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39 description=kube-system/kube-proxy-6qbl7/kube-proxy id=6a2ca16b-6c65-4605-9a23-046cd12627ec name=/runtime.v1.RuntimeService/StartContainer sandboxID=5df2cfa2a164b5fbd9a8fbd8005c71afe227f36356516bd6c4107bcee9235dc9
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.475706845Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.478996084Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.479032244Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.479054874Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.482237101Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.482274557Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.482297442Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.4853634Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.485396229Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.485418966Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.488228807Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.488261005Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.488282674Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.491461644Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 12 21:39:50 pause-634913 crio[2082]: time="2025-12-12T21:39:50.491494202Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	04177a1e4771a       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   23 seconds ago       Running             kube-scheduler            1                   f07a83bea212c       kube-scheduler-pause-634913            kube-system
	9a597043c3e90       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   23 seconds ago       Running             kube-controller-manager   1                   781e9bf80ff13       kube-controller-manager-pause-634913   kube-system
	089141a6f9a07       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   23 seconds ago       Running             coredns                   1                   9839e4e3f7029       coredns-66bc5c9577-ckvjv               kube-system
	dfe8faafc64ee       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   23 seconds ago       Running             kube-proxy                1                   5df2cfa2a164b       kube-proxy-6qbl7                       kube-system
	a1104339af170       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   24 seconds ago       Running             kindnet-cni               1                   4065e64f9fdaa       kindnet-klcm9                          kube-system
	15f89e22e178c       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   24 seconds ago       Running             etcd                      1                   74273bab88c97       etcd-pause-634913                      kube-system
	9058d5f4ba613       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   24 seconds ago       Running             kube-apiserver            1                   cfcb358fb8efb       kube-apiserver-pause-634913            kube-system
	af50e0432616e       138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc   35 seconds ago       Exited              coredns                   0                   9839e4e3f7029       coredns-66bc5c9577-ckvjv               kube-system
	65c4628f15cc9       b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c   About a minute ago   Exited              kindnet-cni               0                   4065e64f9fdaa       kindnet-klcm9                          kube-system
	a4e451f1e032b       94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786   About a minute ago   Exited              kube-proxy                0                   5df2cfa2a164b       kube-proxy-6qbl7                       kube-system
	d0fab9b020f0d       1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2   About a minute ago   Exited              kube-controller-manager   0                   781e9bf80ff13       kube-controller-manager-pause-634913   kube-system
	28b7c58c880a7       b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7   About a minute ago   Exited              kube-apiserver            0                   cfcb358fb8efb       kube-apiserver-pause-634913            kube-system
	76cd552679907       4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949   About a minute ago   Exited              kube-scheduler            0                   f07a83bea212c       kube-scheduler-pause-634913            kube-system
	524fbadbdc48e       2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42   About a minute ago   Exited              etcd                      0                   74273bab88c97       etcd-pause-634913                      kube-system
	
	
	==> coredns [089141a6f9a07e9172e41fd06fc0e7f4302cf7f86789a3b911d66cfb1745662a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54927 - 35359 "HINFO IN 4574428601002878109.8787081905462397161. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022515276s
	
	
	==> coredns [af50e0432616e5578e5a23191899d072d7fe0365c765d16d22ac3d347d2d9094] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:48947 - 46692 "HINFO IN 5867580270348932935.4404528639309647819. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023784089s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-634913
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=pause-634913
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=fac24e5a1017f536a280237ccf94d8ac57d81300
	                    minikube.k8s.io/name=pause-634913
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_12T21_38_41_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 12 Dec 2025 21:38:37 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-634913
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 12 Dec 2025 21:39:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 12 Dec 2025 21:39:28 +0000   Fri, 12 Dec 2025 21:38:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 12 Dec 2025 21:39:28 +0000   Fri, 12 Dec 2025 21:38:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 12 Dec 2025 21:39:28 +0000   Fri, 12 Dec 2025 21:38:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 12 Dec 2025 21:39:28 +0000   Fri, 12 Dec 2025 21:39:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-634913
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 6f43eb6576a1d4bf28a3eab5693b7c4c
	  System UUID:                df27309a-63dc-4ade-947a-7ed260135648
	  Boot ID:                    f10c26e5-8345-4dae-abf5-c7a3da7c7673
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.2
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-ckvjv                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     79s
	  kube-system                 etcd-pause-634913                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         84s
	  kube-system                 kindnet-klcm9                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      79s
	  kube-system                 kube-apiserver-pause-634913             250m (12%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-controller-manager-pause-634913    200m (10%)    0 (0%)      0 (0%)           0 (0%)         84s
	  kube-system                 kube-proxy-6qbl7                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	  kube-system                 kube-scheduler-pause-634913             100m (5%)     0 (0%)      0 (0%)           0 (0%)         84s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 76s                kube-proxy       
	  Normal   Starting                 19s                kube-proxy       
	  Normal   NodeHasSufficientPID     91s (x8 over 91s)  kubelet          Node pause-634913 status is now: NodeHasSufficientPID
	  Warning  CgroupV1                 91s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  91s (x8 over 91s)  kubelet          Node pause-634913 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    91s (x8 over 91s)  kubelet          Node pause-634913 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 91s                kubelet          Starting kubelet.
	  Normal   Starting                 84s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 84s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  84s                kubelet          Node pause-634913 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    84s                kubelet          Node pause-634913 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     84s                kubelet          Node pause-634913 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           80s                node-controller  Node pause-634913 event: Registered Node pause-634913 in Controller
	  Normal   NodeReady                36s                kubelet          Node pause-634913 status is now: NodeReady
	  Normal   RegisteredNode           16s                node-controller  Node pause-634913 event: Registered Node pause-634913 in Controller
	
	
	==> dmesg <==
	[Dec12 20:57] overlayfs: idmapped layers are currently not supported
	[  +3.790478] overlayfs: idmapped layers are currently not supported
	[Dec12 21:05] overlayfs: idmapped layers are currently not supported
	[  +3.613273] overlayfs: idmapped layers are currently not supported
	[Dec12 21:06] overlayfs: idmapped layers are currently not supported
	[Dec12 21:07] overlayfs: idmapped layers are currently not supported
	[ +26.617506] overlayfs: idmapped layers are currently not supported
	[Dec12 21:09] overlayfs: idmapped layers are currently not supported
	[Dec12 21:13] overlayfs: idmapped layers are currently not supported
	[Dec12 21:14] overlayfs: idmapped layers are currently not supported
	[Dec12 21:15] overlayfs: idmapped layers are currently not supported
	[Dec12 21:16] overlayfs: idmapped layers are currently not supported
	[Dec12 21:17] overlayfs: idmapped layers are currently not supported
	[Dec12 21:19] overlayfs: idmapped layers are currently not supported
	[ +26.409125] overlayfs: idmapped layers are currently not supported
	[Dec12 21:20] overlayfs: idmapped layers are currently not supported
	[ +45.357391] overlayfs: idmapped layers are currently not supported
	[Dec12 21:21] overlayfs: idmapped layers are currently not supported
	[ +55.414462] overlayfs: idmapped layers are currently not supported
	[Dec12 21:22] overlayfs: idmapped layers are currently not supported
	[Dec12 21:23] overlayfs: idmapped layers are currently not supported
	[Dec12 21:24] overlayfs: idmapped layers are currently not supported
	[Dec12 21:26] overlayfs: idmapped layers are currently not supported
	[Dec12 21:27] overlayfs: idmapped layers are currently not supported
	[Dec12 21:38] overlayfs: idmapped layers are currently not supported
	
	
	==> etcd [15f89e22e178c3af539227dee8fefc1f4883f34271b0332253a4f08509f0c879] <==
	{"level":"warn","ts":"2025-12-12T21:39:43.029713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.057874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.077581Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.110483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.119913Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.143627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.164052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.179037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.194574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.254005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.251370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.305767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.311857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.329237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.348127Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.369499Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.384661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.402050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.423537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.437738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.463764Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.494198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.509293Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.531969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:39:43.677770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42136","server-name":"","error":"EOF"}
	
	
	==> etcd [524fbadbdc48e9b2e13c860539bd61a4c7b22c37117dc771d7b59db12046b288] <==
	{"level":"warn","ts":"2025-12-12T21:38:36.572114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.594247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.631762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44902","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.690539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.708752Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.724080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-12T21:38:36.873995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44992","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-12T21:39:32.209268Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-12T21:39:32.209334Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"pause-634913","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-12-12T21:39:32.209432Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T21:39:32.352801Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-12T21:39:32.352893Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T21:39:32.352949Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-12-12T21:39:32.353059Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-12-12T21:39:32.353078Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T21:39:32.353138Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T21:39:32.353148Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T21:39:32.353121Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-12T21:39:32.353205Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-12T21:39:32.353219Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-12T21:39:32.353226Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T21:39:32.354915Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-12-12T21:39:32.355008Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-12T21:39:32.355090Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-12T21:39:32.355124Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"pause-634913","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 21:40:04 up  4:22,  0 user,  load average: 2.49, 1.62, 1.56
	Linux pause-634913 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [65c4628f15cc95be90c08bf36a69f6cb1b76eee884867060f1e71c3247881865] <==
	I1212 21:38:47.423927       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 21:38:47.512690       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1212 21:38:47.512836       1 main.go:148] setting mtu 1500 for CNI 
	I1212 21:38:47.512854       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 21:38:47.512865       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T21:38:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 21:38:47.622817       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 21:38:47.712501       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 21:38:47.712601       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 21:38:47.713519       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1212 21:39:17.623006       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1212 21:39:17.713705       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1212 21:39:17.713705       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1212 21:39:17.713917       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1212 21:39:19.013394       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 21:39:19.013428       1 metrics.go:72] Registering metrics
	I1212 21:39:19.013489       1 controller.go:711] "Syncing nftables rules"
	I1212 21:39:27.628597       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 21:39:27.628658       1 main.go:301] handling current node
	
	
	==> kindnet [a1104339af1705164883992605c6d239c40cc3820c8e583be873847c54a5fdaf] <==
	I1212 21:39:40.308602       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1212 21:39:40.308840       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1212 21:39:40.308967       1 main.go:148] setting mtu 1500 for CNI 
	I1212 21:39:40.308978       1 main.go:178] kindnetd IP family: "ipv4"
	I1212 21:39:40.308992       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-12T21:39:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1212 21:39:40.486837       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1212 21:39:40.486869       1 controller.go:381] "Waiting for informer caches to sync"
	I1212 21:39:40.486879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1212 21:39:40.487217       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1212 21:39:44.888541       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1212 21:39:44.888767       1 metrics.go:72] Registering metrics
	I1212 21:39:44.888862       1 controller.go:711] "Syncing nftables rules"
	I1212 21:39:50.475362       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 21:39:50.475417       1 main.go:301] handling current node
	I1212 21:40:00.484730       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1212 21:40:00.484820       1 main.go:301] handling current node
	
	
	==> kube-apiserver [28b7c58c880a7ef000a9e96201895fc8fa32c3ff7f5ec9ac86b3ccc8f337cb63] <==
	W1212 21:39:32.232935       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.232983       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1212 21:39:32.233105       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	W1212 21:39:32.234807       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.234886       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.234886       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.234936       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.234954       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.234986       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235007       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235042       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235073       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235092       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235135       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235154       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235193       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235204       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235245       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235261       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235296       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235322       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235371       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235410       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1212 21:39:32.235424       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9058d5f4ba61315fb9ccc3747fdcbdafa77a1fc71db2d8f2d8363f05cef61c0a] <==
	I1212 21:39:44.826268       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1212 21:39:44.832183       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1212 21:39:44.832275       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1212 21:39:44.833461       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1212 21:39:44.833605       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1212 21:39:44.833706       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1212 21:39:44.833786       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1212 21:39:44.835212       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1212 21:39:44.835294       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1212 21:39:44.835358       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1212 21:39:44.843324       1 aggregator.go:171] initial CRD sync complete...
	I1212 21:39:44.843415       1 autoregister_controller.go:144] Starting autoregister controller
	I1212 21:39:44.843445       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1212 21:39:44.844122       1 cache.go:39] Caches are synced for autoregister controller
	I1212 21:39:44.894980       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1212 21:39:44.912506       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1212 21:39:44.912606       1 policy_source.go:240] refreshing policies
	E1212 21:39:44.912923       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1212 21:39:44.917100       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1212 21:39:45.434590       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1212 21:39:46.703761       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1212 21:39:48.103825       1 controller.go:667] quota admission added evaluator for: endpoints
	I1212 21:39:48.202807       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1212 21:39:48.397303       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1212 21:39:48.501209       1 controller.go:667] quota admission added evaluator for: deployments.apps
	
	
	==> kube-controller-manager [9a597043c3e90bc5d681d827f790f21674bf6ae8e339a465203940e08aa83fa9] <==
	I1212 21:39:48.139038       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1212 21:39:48.139139       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 21:39:48.139725       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1212 21:39:48.139821       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1212 21:39:48.141461       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1212 21:39:48.141519       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1212 21:39:48.141502       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1212 21:39:48.141692       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1212 21:39:48.152114       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 21:39:48.154479       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 21:39:48.173125       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1212 21:39:48.180416       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1212 21:39:48.188921       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1212 21:39:48.197788       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 21:39:48.197986       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 21:39:48.198513       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-634913"
	I1212 21:39:48.198612       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1212 21:39:48.199117       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 21:39:48.199385       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 21:39:48.199394       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 21:39:48.204083       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 21:39:48.204312       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 21:39:48.204494       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 21:39:48.204530       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 21:39:48.204538       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	
	
	==> kube-controller-manager [d0fab9b020f0dd464170e1bae896878b70b11d803849b796b85ccc8919abdbb4] <==
	I1212 21:38:44.741906       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1212 21:38:44.741982       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1212 21:38:44.743159       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1212 21:38:44.745528       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 21:38:44.746647       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1212 21:38:44.772424       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1212 21:38:44.785601       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1212 21:38:44.785693       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1212 21:38:44.785703       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1212 21:38:44.785787       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-634913"
	I1212 21:38:44.785614       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1212 21:38:44.785869       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 21:38:44.785950       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1212 21:38:44.785978       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1212 21:38:44.786714       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1212 21:38:44.786775       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1212 21:38:44.786957       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1212 21:38:44.795287       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1212 21:38:44.795362       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1212 21:38:44.795387       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1212 21:38:44.795401       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1212 21:38:44.795407       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1212 21:38:44.801226       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1212 21:38:44.821758       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-634913" podCIDRs=["10.244.0.0/24"]
	I1212 21:39:30.114328       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a4e451f1e032b800e0cc40a399f4d41663317a67b1b0f6595a114ab35837c89a] <==
	I1212 21:38:47.368501       1 server_linux.go:53] "Using iptables proxy"
	I1212 21:38:47.451013       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 21:38:47.551895       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 21:38:47.552008       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1212 21:38:47.552119       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 21:38:47.575660       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 21:38:47.575777       1 server_linux.go:132] "Using iptables Proxier"
	I1212 21:38:47.579225       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 21:38:47.579611       1 server.go:527] "Version info" version="v1.34.2"
	I1212 21:38:47.579832       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:38:47.581563       1 config.go:200] "Starting service config controller"
	I1212 21:38:47.581640       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 21:38:47.581697       1 config.go:106] "Starting endpoint slice config controller"
	I1212 21:38:47.581725       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 21:38:47.581759       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 21:38:47.581786       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 21:38:47.583882       1 config.go:309] "Starting node config controller"
	I1212 21:38:47.584687       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 21:38:47.584751       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 21:38:47.682386       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 21:38:47.682390       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 21:38:47.682429       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [dfe8faafc64eea9de62c15fd70a418666f4b5237c83b63a0c38301897973db39] <==
	I1212 21:39:43.714483       1 server_linux.go:53] "Using iptables proxy"
	I1212 21:39:44.308201       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1212 21:39:44.932683       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1212 21:39:44.936433       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1212 21:39:44.941801       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1212 21:39:45.177406       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1212 21:39:45.177477       1 server_linux.go:132] "Using iptables Proxier"
	I1212 21:39:45.182682       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1212 21:39:45.183028       1 server.go:527] "Version info" version="v1.34.2"
	I1212 21:39:45.183056       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:39:45.184457       1 config.go:200] "Starting service config controller"
	I1212 21:39:45.184484       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1212 21:39:45.189808       1 config.go:106] "Starting endpoint slice config controller"
	I1212 21:39:45.189935       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1212 21:39:45.190007       1 config.go:403] "Starting serviceCIDR config controller"
	I1212 21:39:45.190048       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1212 21:39:45.194857       1 config.go:309] "Starting node config controller"
	I1212 21:39:45.200104       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1212 21:39:45.200218       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1212 21:39:45.284731       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1212 21:39:45.291491       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1212 21:39:45.291684       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [04177a1e4771a3b279af00e0622107164d8c5dd06ac5b16bd5e02edc091c1d49] <==
	I1212 21:39:43.188995       1 serving.go:386] Generated self-signed cert in-memory
	I1212 21:39:45.582506       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.2"
	I1212 21:39:45.582638       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1212 21:39:45.594180       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1212 21:39:45.594298       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1212 21:39:45.594374       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:39:45.594439       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:39:45.594533       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 21:39:45.594566       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 21:39:45.595737       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1212 21:39:45.595819       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1212 21:39:45.695345       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1212 21:39:45.695494       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1212 21:39:45.695606       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [76cd55267990721a658b7986b785e1c9f8486c092af63ee52fa6bab967adf8fe] <==
	E1212 21:38:37.777709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1212 21:38:37.777777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:38:38.618014       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1212 21:38:38.620282       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1212 21:38:38.662485       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1212 21:38:38.720034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1212 21:38:38.789078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1212 21:38:38.799802       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1212 21:38:38.820429       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1212 21:38:38.835241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1212 21:38:38.867657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1212 21:38:38.907657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1212 21:38:38.937690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1212 21:38:39.032160       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1212 21:38:39.050900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1212 21:38:39.113694       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1212 21:38:39.177366       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1212 21:38:39.361975       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1212 21:38:41.457581       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:39:32.208799       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1212 21:39:32.208820       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1212 21:39:32.208841       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1212 21:39:32.208868       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1212 21:39:32.209022       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1212 21:39:32.209039       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.020584    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qbl7\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="730f0fa7-551b-4674-ab46-dafb588f985c" pod="kube-system/kube-proxy-6qbl7"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.020875    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-ckvjv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="97f3d46a-98b9-449a-b0fa-f44cf663939d" pod="kube-system/coredns-66bc5c9577-ckvjv"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: I1212 21:39:40.052213    1323 scope.go:117] "RemoveContainer" containerID="d0fab9b020f0dd464170e1bae896878b70b11d803849b796b85ccc8919abdbb4"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.052959    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qbl7\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="730f0fa7-551b-4674-ab46-dafb588f985c" pod="kube-system/kube-proxy-6qbl7"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.053185    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/coredns-66bc5c9577-ckvjv\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="97f3d46a-98b9-449a-b0fa-f44cf663939d" pod="kube-system/coredns-66bc5c9577-ckvjv"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.053451    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-pause-634913\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="5762c8cd1a27671205a64fe2b09ac7f5" pod="kube-system/kube-scheduler-pause-634913"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.053751    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/etcd-pause-634913\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="550018b5380a5824cf7104c0b1f6f137" pod="kube-system/etcd-pause-634913"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.053991    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-pause-634913\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="d8da9ad8465ec53bb8ac738d2d9e8ac9" pod="kube-system/kube-apiserver-pause-634913"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.054316    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-634913\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="f4d058c83c6449560b36cdcc47554d73" pod="kube-system/kube-controller-manager-pause-634913"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: E1212 21:39:40.054651    1323 status_manager.go:1018] "Failed to get status for pod" err="Get \"https://192.168.85.2:8443/api/v1/namespaces/kube-system/pods/kindnet-klcm9\": dial tcp 192.168.85.2:8443: connect: connection refused" podUID="81d3d855-8a49-4ca3-af27-1694f53c05c6" pod="kube-system/kindnet-klcm9"
	Dec 12 21:39:40 pause-634913 kubelet[1323]: I1212 21:39:40.087876    1323 scope.go:117] "RemoveContainer" containerID="76cd55267990721a658b7986b785e1c9f8486c092af63ee52fa6bab967adf8fe"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.621554    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-controller-manager-pause-634913\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="f4d058c83c6449560b36cdcc47554d73" pod="kube-system/kube-controller-manager-pause-634913"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.622186    1323 reflector.go:205] "Failed to watch" err="configmaps \"kube-proxy\" is forbidden: User \"system:node:pause-634913\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.622266    1323 reflector.go:205] "Failed to watch" err="configmaps \"coredns\" is forbidden: User \"system:node:pause-634913\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"coredns\"" type="*v1.ConfigMap"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.677852    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kindnet-klcm9\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="81d3d855-8a49-4ca3-af27-1694f53c05c6" pod="kube-system/kindnet-klcm9"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.778122    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-proxy-6qbl7\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="730f0fa7-551b-4674-ab46-dafb588f985c" pod="kube-system/kube-proxy-6qbl7"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.796622    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"coredns-66bc5c9577-ckvjv\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="97f3d46a-98b9-449a-b0fa-f44cf663939d" pod="kube-system/coredns-66bc5c9577-ckvjv"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.802214    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-scheduler-pause-634913\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="5762c8cd1a27671205a64fe2b09ac7f5" pod="kube-system/kube-scheduler-pause-634913"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.810342    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"etcd-pause-634913\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="550018b5380a5824cf7104c0b1f6f137" pod="kube-system/etcd-pause-634913"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.811779    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-634913\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="d8da9ad8465ec53bb8ac738d2d9e8ac9" pod="kube-system/kube-apiserver-pause-634913"
	Dec 12 21:39:44 pause-634913 kubelet[1323]: E1212 21:39:44.829235    1323 status_manager.go:1018] "Failed to get status for pod" err="pods \"kube-apiserver-pause-634913\" is forbidden: User \"system:node:pause-634913\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'pause-634913' and this object" podUID="d8da9ad8465ec53bb8ac738d2d9e8ac9" pod="kube-system/kube-apiserver-pause-634913"
	Dec 12 21:39:50 pause-634913 kubelet[1323]: W1212 21:39:50.958539    1323 conversion.go:112] Could not get instant cpu stats: cumulative stats decrease
	Dec 12 21:39:58 pause-634913 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 12 21:39:58 pause-634913 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 12 21:39:58 pause-634913 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-634913 -n pause-634913
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p pause-634913 -n pause-634913: exit status 2 (387.194196ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-634913 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.99s)
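The two post-mortem probes recorded above (the `status --format={{.APIServer}}` check and the non-running-pod listing) can be replayed by hand against the same profile. The sketch below is illustrative only: it assumes the `out/minikube-linux-arm64` binary and the `pause-634913` profile from this run are still present, and it is not the harness's own code.

// replay_pause_postmortem.go -- hypothetical helper, not part of the test suite.
package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	// Print the command output together with any exit error (exit status 2 is
	// what the report above shows while the profile is paused).
	fmt.Printf("$ %s %v\n%s(err: %v)\n", name, args, out, err)
}

func main() {
	// Same API-server status probe the post-mortem ran.
	run("out/minikube-linux-arm64", "status",
		"--format={{.APIServer}}", "-p", "pause-634913", "-n", "pause-634913")

	// Same check for pods that are not in the Running phase.
	run("kubectl", "--context", "pause-634913", "get", "po",
		"-o=jsonpath={.items[*].metadata.name}", "-A",
		"--field-selector=status.phase!=Running")
}

While the profile is paused, a non-zero exit from `status` with `Running` on stdout is expected, which is why the harness notes "status error: exit status 2 (may be ok)".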

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7200.086s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
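The wait announced above is a label-selector poll against the kubernetes-dashboard namespace. A minimal client-go sketch of that kind of poll follows; the kubeconfig path and polling interval are placeholders, and this is not the test's actual implementation, which drives the wait from helpers_test.go.

// poll_dashboard_pods.go -- illustrative only; kubeconfig path is a placeholder.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(9 * time.Minute) // same budget the log line above states
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// While the API server is unreachable this is the same class of
			// "connection refused" error the warnings below record.
			fmt.Println("WARNING:", err)
		} else if len(pods.Items) > 0 {
			fmt.Printf("found %d dashboard pod(s)\n", len(pods.Items))
			return
		}
		time.Sleep(10 * time.Second) // polling interval chosen arbitrarily for this sketch
	}
	fmt.Println("timed out waiting for kubernetes-dashboard pods")
}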
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1212 22:01:06.207634  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/default-k8s-diff-port-540143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	    (previous warning repeated 89 more times)
E1212 22:02:35.805180  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	    (previous warning repeated 7 more times)
E1212 22:02:44.061009  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
	helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
	    (previous warning repeated 24 more times)
E1212 22:03:08.507159  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/old-k8s-version-636603/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above repeats 66 more times while the apiserver at 192.168.76.2:8443 keeps refusing connections]
E1212 22:04:19.913613  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above repeats 11 more times]
E1212 22:04:31.573114  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/old-k8s-version-636603/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above repeats 4 more times]
E1212 22:04:36.832570  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
[the warning above repeats 61 more times]
E1212 22:05:38.505052  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/default-k8s-diff-port-540143/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1212 22:07:35.804999  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1212 22:07:44.061821  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
E1212 22:08:08.507239  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/old-k8s-version-636603/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.76.2:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.76.2:8443: connect: connection refused
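The repeated connection-refused warnings above are emitted while the test helper keeps polling the API server for dashboard pods during the cluster restart. As a rough, hedged sketch only (not the actual minikube helper; the function name waitForDashboardPods and its parameters are assumptions made for illustration), a poll loop of this shape produces one warning per failed attempt and keeps retrying until the context timeout:

package example

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDashboardPods is an illustrative (assumed) helper: it polls once per
// second until a pod matching the label selector is Running, logging and
// tolerating transient errors such as "connection refused" while the
// apiserver is still coming back up.
func waitForDashboardPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			// Transient apiserver errors are only warned about and retried,
			// which is what produces the repeated lines in the log above.
			log.Printf("WARNING: pod list for %q %q returned: %v", ns, selector, err)
			return false, nil
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
}

In the goroutine dump that follows, goroutine 4286 is blocked inside exactly this kind of wait.PollUntilContextTimeout call (PodWait at helpers_test.go:380, called from validateAppExistsAfterStop), which is why the test is still polling when the 2-hour suite alarm fires.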
panic: test timed out after 2h0m0s
	running tests:
		TestNetworkPlugins (29m8s)
		TestNetworkPlugins/group/auto (43s)
		TestNetworkPlugins/group/auto/Start (43s)
		TestStartStop (31m24s)
		TestStartStop/group/no-preload (24m48s)
		TestStartStop/group/no-preload/serial (24m48s)
		TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (8m27s)

                                                
                                                
goroutine 5401 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2682 +0x2b0
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x38

                                                
                                                
goroutine 1 [chan receive, 24 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x40003aac40, 0x40008fbbb8)
	/usr/local/go/src/testing/testing.go:1940 +0x104
testing.runTests(0x400000e030, {0x534c680, 0x2c, 0x2c}, {0x40008fbd08?, 0x125774?, 0x5375080?})
	/usr/local/go/src/testing/testing.go:2475 +0x3b8
testing.(*M).Run(0x4000676a00)
	/usr/local/go/src/testing/testing.go:2337 +0x530
k8s.io/minikube/test/integration.TestMain(0x4000676a00)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:64 +0xf0
main.main()
	_testmain.go:133 +0x88

                                                
                                                
goroutine 4299 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x40000a1f40, 0x40000a1f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0xb8?, 0x40000a1f40, 0x40000a1f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x400038b730?, 0x40004565f0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001b21200?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4288
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 186 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 185
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 1072 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001610300, 0x4004effab0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1071
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 4069 [chan receive, 20 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40006a6d80, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4095
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 166 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000234080?}, 0x40006c1380?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 177
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 185 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x40000a5f40, 0x400134af88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0x0?, 0x40000a5f40, 0x40000a5f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000766600?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 167
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 4099 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x40018ec490, 0x13)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40018ec480)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40006a6d80)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40017a6ee0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x4001ddcea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x40006b2f38, {0x369e520, 0x40016fa600}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001ddcfa8?, {0x369e520?, 0x40016fa600?}, 0xe0?, 0x4001b8d980?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016e2d00, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4069
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 184 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40006c0b10, 0x2d)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40006c0b00)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40006a6c60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40013dfe88?, 0x2a0ac?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0xffff958ff5c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x4000120f38, {0x369e520, 0x4004f1ebd0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x0?, {0x369e520?, 0x4004f1ebd0?}, 0xa0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4004ee08b0, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 167
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 1501 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40018ecd90, 0x24)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40018ecd80)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x4001420600)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40002db570?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x400138ff38, {0x369e520, 0x40018b9bf0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40018b9bf0?}, 0x60?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x400186d460, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1526
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 4288 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40017f5080, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 4286
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 660 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0xffff4ea31000, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40001be580?, 0x297c4?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40001be580)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40001be580)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x4000620a00)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x4000620a00)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x40004dc400, {0x36d4000, 0x4000620a00})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x40004dc400)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 658
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

                                                
                                                
goroutine 167 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40006a6c60, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 177
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 992 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x4001311680, 0x4000083d50)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 991
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 1052 [chan send, 109 minutes]:
os/exec.(*Cmd).watchCtx(0x40014fa180, 0x400146e5b0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 764
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 1526 [chan receive, 82 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x4001420600, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 1524
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 1894 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x4000397380, 0x4004eff1f0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1893
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 3818 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3817
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 852 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x40013e1740, 0x4001439f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0x8?, 0x40013e1740, 0x40013e1788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x4001da2c00?, 0x4001dbe3c0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4001586e00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 850
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 4100 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x4001ddc740, 0x4001ddc788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0xf5?, 0x4001ddc740, 0x4001ddc788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x0?, 0x4001ddc750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x4000234080?, 0x400133cd80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4069
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 3272 [chan receive, 29 minutes]:
testing.(*T).Run(0x40016f0000, {0x296d71f?, 0xd5b730e545e?}, 0x4001358048)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins(0x40016f0000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xe4
testing.tRunner(0x40016f0000, 0x339baf0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 5316 [select]:
os/exec.(*Cmd).watchCtx(0x4000767500, 0x4000118e70)
	/usr/local/go/src/os/exec/exec.go:789 +0x70
created by os/exec.(*Cmd).Start in goroutine 5313
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 1864 [chan send, 80 minutes]:
os/exec.(*Cmd).watchCtx(0x4000396a80, 0x4004efe7e0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1863
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 3565 [chan receive, 29 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x400150a000, 0x4001358048)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3272
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3786 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40017f4de0, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3812
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 3817 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x40013df740, 0x4001436f88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0x6e?, 0x40013df740, 0x40013df788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x0?, 0x40013df750?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x36f42d0?, 0x4000234080?, 0x4001b8c480?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3786
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 5313 [syscall]:
syscall.Syscall6(0x5f, 0x3, 0x10, 0x400130ec38, 0x4, 0x400075c900, 0x0)
	/usr/local/go/src/syscall/syscall_linux.go:96 +0x2c
internal/syscall/unix.Waitid(0x400130ed98?, 0x1929a0?, 0xffffd384d1a3?, 0x0?, 0x40002e0fd0?)
	/usr/local/go/src/internal/syscall/unix/waitid_linux.go:18 +0x44
os.(*Process).pidfdWait.func1(...)
	/usr/local/go/src/os/pidfd_linux.go:109
os.ignoringEINTR(...)
	/usr/local/go/src/os/file_posix.go:256
os.(*Process).pidfdWait(0x40018ec4c0)
	/usr/local/go/src/os/pidfd_linux.go:108 +0x144
os.(*Process).wait(0x400130ed68?)
	/usr/local/go/src/os/exec_unix.go:25 +0x24
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:340
os/exec.(*Cmd).Wait(0x4000767500)
	/usr/local/go/src/os/exec/exec.go:922 +0x38
os/exec.(*Cmd).Run(0x4000767500)
	/usr/local/go/src/os/exec/exec.go:626 +0x38
k8s.io/minikube/test/integration.Run(0x4001618380, 0x4000767500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:104 +0x154
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0x4001618380)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x44
testing.tRunner(0x4001618380, 0x4001445c50)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3566
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 849 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000234080?}, 0x40016181c0?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 848
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 851 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x40008ec0d0, 0x2b)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40008ec0c0)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40017f4cc0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40014594a0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x40013a0f38, {0x369e520, 0x40014525a0}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x11?, {0x369e520?, 0x40014525a0?}, 0x50?, 0x4001b5cf00?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x4001345470, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 850
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 853 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 852
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 850 [chan receive, 111 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).run(0x40017f4cc0, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:151 +0x218
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 848
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cache.go:126 +0x4d0

                                                
                                                
goroutine 1503 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 1502
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 3785 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000234080?}, 0x4001b8c480?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 3812
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 1988 [chan send, 79 minutes]:
os/exec.(*Cmd).watchCtx(0x400133c600, 0x40017a6cb0)
	/usr/local/go/src/os/exec/exec.go:814 +0x280
created by os/exec.(*Cmd).Start in goroutine 1460
	/usr/local/go/src/os/exec/exec.go:775 +0x678

                                                
                                                
goroutine 1263 [IO wait, 109 minutes]:
internal/poll.runtime_pollWait(0xffff4ee39800, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40017dda80?, 0xdbd0c?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0x40017dda80)
	/usr/local/go/src/internal/poll/fd_unix.go:613 +0x21c
net.(*netFD).accept(0x40017dda80)
	/usr/local/go/src/net/fd_unix.go:161 +0x28
net.(*TCPListener).accept(0x40018ec080)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x24
net.(*TCPListener).Accept(0x40018ec080)
	/usr/local/go/src/net/tcpsock.go:380 +0x2c
net/http.(*Server).Serve(0x4001582000, {0x36d4000, 0x40018ec080})
	/usr/local/go/src/net/http/server.go:3463 +0x24c
net/http.(*Server).ListenAndServe(0x4001582000)
	/usr/local/go/src/net/http/server.go:3389 +0x80
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2218
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 1261
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2217 +0x104

                                                
                                                
goroutine 4068 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000234080?}, 0x400133cd80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4095
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 5315 [IO wait]:
internal/poll.runtime_pollWait(0xffff4ee39e00, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40016ef1a0?, 0x40014f428e?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40016ef1a0, {0x40014f428e, 0x9d72, 0x9d72})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a6df0, {0x40014f428e?, 0x4001712568?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001445f20, {0x369c8e8, 0x400071e198})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x4001445f20}, {0x369c8e8, 0x400071e198}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a6df0?, {0x369cae0, 0x4001445f20})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a6df0, {0x369cae0, 0x4001445f20})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x4001445f20}, {0x369c968, 0x40000a6df0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4000767380?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5313
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4

                                                
                                                
goroutine 3816 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0x40006c0110, 0x16)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40006c0100)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40017f4de0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x40000832d0?, 0x1618bc?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x4001ddaea8?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x40006b1f38, {0x369e520, 0x4001f9e390}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x4001ddafa8?, {0x369e520?, 0x4001f9e390?}, 0x50?, 0x4000399730?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40017582a0, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 3786
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174

                                                
                                                
goroutine 1502 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x36e69b0, 0x4000082070}, 0x40013e2740, 0x400130cf88)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/wait.go:210 +0xac
k8s.io/apimachinery/pkg/util/wait.poll({0x36e69b0, 0x4000082070}, 0x31?, 0x40013e2740, 0x40013e2788)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:260 +0x8c
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x36e69b0?, 0x4000082070?}, 0x161f90?, 0x40016c8700?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:200 +0x40
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x95c64?, 0x4000397500?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 1526
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:146 +0x20c

                                                
                                                
goroutine 3646 [chan receive, 29 minutes]:
testing.(*testState).waitParallel(0x4000724190)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x400150bdc0)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x400150bdc0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x400150bdc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x400150bdc0, 0x40006d1200)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3565
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 4101 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4100
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                                
goroutine 1136 [select, 109 minutes]:
net/http.(*persistConn).writeLoop(0x4001a6cb40)
	/usr/local/go/src/net/http/transport.go:2600 +0x94
created by net/http.(*Transport).dialConn in goroutine 1175
	/usr/local/go/src/net/http/transport.go:1948 +0x1164

                                                
                                                
goroutine 3647 [chan receive, 29 minutes]:
testing.(*testState).waitParallel(0x4000724190)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x4001587a40)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x4001587a40)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x4001587a40)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x4001587a40, 0x40006d1280)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3565
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3308 [chan receive, 31 minutes]:
testing.(*T).Run(0x40016c8c40, {0x296d71f?, 0x4001308f58?}, 0x339bd20)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop(0x40016c8c40)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x3c
testing.tRunner(0x40016c8c40, 0x339bb38)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 1135 [select, 109 minutes]:
net/http.(*persistConn).readLoop(0x4001a6cb40)
	/usr/local/go/src/net/http/transport.go:2398 +0xa6c
created by net/http.(*Transport).dialConn in goroutine 1175
	/usr/local/go/src/net/http/transport.go:1947 +0x111c

                                                
                                                
goroutine 1525 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000234080?}, 0x4000397380?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 1524
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 4286 [select]:
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x36e6618, 0x400035b180}, {0x36d4660, 0x4001371f20}, 0x1, 0x0, 0x4001483be0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/loop.go:66 +0x158
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x36e6618?, 0x40002fe930?}, 0x3b9aca00, 0x4001483e08?, 0x1, 0x4001483be0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:48 +0x8c
k8s.io/minikube/test/integration.PodWait({0x36e6618, 0x40002fe930}, 0x4001618a80, {0x4001eb2018, 0x11}, {0x29941e1, 0x14}, {0x29ac150, 0x1c}, 0x7dba821800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:380 +0x22c
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x36e6618, 0x40002fe930}, 0x4001618a80, {0x4001eb2018, 0x11}, {0x29786f9?, 0x1496c5a200161e84?}, {0x693c90a1?, 0x4001438f58?}, {0x161f08?, ...})
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:272 +0xf8
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0x4001618a80?)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:154 +0x44
testing.tRunner(0x4001618a80, 0x4000412d80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3928
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3515 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1891 +0x3d0
testing.tRunner(0x4001586e00, 0x339bd20)
	/usr/local/go/src/testing/testing.go:1940 +0x104
created by testing.(*T).Run in goroutine 3308
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3698 [chan receive, 29 minutes]:
testing.(*testState).waitParallel(0x4000724190)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40016c8000)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40016c8000)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40016c8000)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40016c8000, 0x40017dcb80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3565
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3644 [chan receive, 29 minutes]:
testing.(*testState).waitParallel(0x4000724190)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x400150ae00)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x400150ae00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x400150ae00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x400150ae00, 0x40006d1100)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3565
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3566 [chan receive]:
testing.(*T).Run(0x400150a700, {0x296d724?, 0x368adf0?}, 0x4001445c50)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x400150a700)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x4f4
testing.tRunner(0x400150a700, 0x40006d0a80)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3565
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 4287 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x36ff660, {{0x36f42d0, 0x4000234080?}, 0x4001618a80?})
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:320 +0x288
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 4286
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/delaying_queue.go:157 +0x204

                                                
                                                
goroutine 3699 [chan receive, 29 minutes]:
testing.(*testState).waitParallel(0x4000724190)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x40016c8700)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x40016c8700)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x40016c8700)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x40016c8700, 0x40017dcc00)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3565
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3519 [chan receive, 24 minutes]:
testing.(*T).Run(0x4001587500, {0x296eb91?, 0x0?}, 0x4000412d00)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1(0x4001587500)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:128 +0x7e4
testing.tRunner(0x4001587500, 0x40019aa5c0)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3515
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 3928 [chan receive, 8 minutes]:
testing.(*T).Run(0x40016181c0, {0x299a203?, 0x40000006ee?}, 0x4000412d80)
	/usr/local/go/src/testing/testing.go:2005 +0x378
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0x40016181c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:153 +0x1b8
testing.tRunner(0x40016181c0, 0x4000412d00)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3519
	/usr/local/go/src/testing/testing.go:1997 +0x364
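
Goroutines 3519 and 3928 are the parent TestStartStop runners: testing.(*T).Run blocks on a channel until its subtest goroutine finishes, so each level of nesting in start_stop_delete_test.go shows up as one long-lived "chan receive" frame. A hypothetical reduction of that nesting:

```go
package example

import "testing"

func TestStartStopSketch(t *testing.T) {
	// The outer t.Run calls do not return until the innermost subtest
	// completes, so each parent goroutine waits in a channel receive for as
	// long as the child keeps running.
	t.Run("group", func(t *testing.T) {
		t.Run("serial", func(t *testing.T) {
			t.Run("UserAppExistsAfterStop", func(t *testing.T) {
				// the long-running validation would go here
			})
		})
	})
}
```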

                                                
                                                
goroutine 3645 [chan receive, 29 minutes]:
testing.(*testState).waitParallel(0x4000724190)
	/usr/local/go/src/testing/testing.go:2116 +0x158
testing.(*T).Parallel(0x400150b180)
	/usr/local/go/src/testing/testing.go:1709 +0x19c
k8s.io/minikube/test/integration.MaybeParallel(0x400150b180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:501 +0x5c
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0x400150b180)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x2c0
testing.tRunner(0x400150b180, 0x40006d1180)
	/usr/local/go/src/testing/testing.go:1934 +0xc8
created by testing.(*T).Run in goroutine 3565
	/usr/local/go/src/testing/testing.go:1997 +0x364

                                                
                                                
goroutine 5314 [IO wait]:
internal/poll.runtime_pollWait(0xffff4ee39400, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0xa0
internal/poll.(*pollDesc).wait(0x40016ef0e0?, 0x400142e345?, 0x1)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x28
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0x40016ef0e0, {0x400142e345, 0x4bb, 0x4bb})
	/usr/local/go/src/internal/poll/fd_unix.go:165 +0x1e0
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0x40000a6dd8, {0x400142e345?, 0x4001719d68?, 0x8b27c?})
	/usr/local/go/src/os/file.go:144 +0x68
bytes.(*Buffer).ReadFrom(0x4001445e60, {0x369c8e8, 0x400071e188})
	/usr/local/go/src/bytes/buffer.go:217 +0x90
io.copyBuffer({0x369cae0, 0x4001445e60}, {0x369c8e8, 0x400071e188}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x14c
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x40000a6dd8?, {0x369cae0, 0x4001445e60})
	/usr/local/go/src/os/file.go:295 +0x58
os.(*File).WriteTo(0x40000a6dd8, {0x369cae0, 0x4001445e60})
	/usr/local/go/src/os/file.go:273 +0x9c
io.copyBuffer({0x369cae0, 0x4001445e60}, {0x369c968, 0x40000a6dd8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x98
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x40
os/exec.(*Cmd).Start.func2(0x4001618380?)
	/usr/local/go/src/os/exec/exec.go:749 +0x30
created by os/exec.(*Cmd).Start in goroutine 5313
	/usr/local/go/src/os/exec/exec.go:748 +0x6a4
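
Goroutine 5314 is the output-copying goroutine that os/exec starts when a command's Stdout is an io.Writer other than an *os.File: Start creates a pipe for the child and copies the pipe into the supplied buffer until EOF. A small self-contained illustration (the echoed command is arbitrary):

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	var out bytes.Buffer
	cmd := exec.Command("echo", "hello from a child process")
	// Because &out is not an *os.File, Start spawns a background goroutine
	// (like goroutine 5314) that copies the child's pipe into the buffer.
	cmd.Stdout = &out
	if err := cmd.Run(); err != nil {
		fmt.Println("command failed:", err)
		return
	}
	fmt.Print(out.String())
}
```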

                                                
                                                
goroutine 4298 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0x40006c1510, 0x1)
	/usr/local/go/src/runtime/sema.go:606 +0x140
sync.(*Cond).Wait(0x40006c1500)
	/usr/local/go/src/sync/cond.go:71 +0xa4
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3702b60)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/util/workqueue/queue.go:277 +0x80
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0x40017f5080)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:160 +0x38
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:155
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1({0x400035afc0?, 0x0?})
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x24
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext.func1({0x36e69b0?, 0x4000082070?}, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:255 +0x58
k8s.io/apimachinery/pkg/util/wait.BackoffUntilWithContext({0x36e69b0, 0x4000082070}, 0x400130ff38, {0x369e520, 0x40017be030}, 0x1)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:256 +0xac
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0x36f42d0?, {0x369e520?, 0x40017be030?}, 0xa0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:233 +0x4c
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x40016e2070, 0x3b9aca00, 0x0, 0x1, 0x4000082070)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:210 +0x7c
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/backoff.go:163
created by k8s.io/client-go/transport.(*dynamicClientCert).run in goroutine 4288
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.33.4/transport/cert_rotation.go:144 +0x174
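
Goroutine 4298 is client-go's certificate-rotation worker: wait.Until keeps re-invoking a worker function that blocks on a typed workqueue's Get until an item arrives or the queue shuts down, which is the sync.Cond.Wait shown above. A minimal sketch of that queue-plus-worker loop, assuming nothing about the real rotation logic:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/util/workqueue"
)

func main() {
	queue := workqueue.NewTyped[string]()
	stopCh := make(chan struct{})

	// wait.Until re-runs the worker every second until stopCh closes; inside,
	// Get blocks while the queue is empty, mirroring the idle worker above.
	go wait.Until(func() {
		for {
			item, shutdown := queue.Get()
			if shutdown {
				return
			}
			fmt.Println("processing", item)
			queue.Done(item)
		}
	}, time.Second, stopCh)

	queue.Add("rotate-client-cert") // placeholder work item
	time.Sleep(100 * time.Millisecond)
	queue.ShutDown()
	close(stopCh)
}
```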

                                                
                                                
goroutine 4300 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:297 +0x13c
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 4299
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.33.4/pkg/util/wait/poll.go:280 +0xb8

                                                
                                    

Test pass (232/316)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 10.52
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.43
9 TestDownloadOnly/v1.28.0/DeleteAll 0.38
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.23
12 TestDownloadOnly/v1.34.2/json-events 5.84
13 TestDownloadOnly/v1.34.2/preload-exists 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.09
18 TestDownloadOnly/v1.34.2/DeleteAll 0.23
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.35.0-beta.0/json-events 6.46
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.61
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 164.32
40 TestAddons/serial/GCPAuth/Namespaces 0.19
41 TestAddons/serial/GCPAuth/FakeCredentials 9.84
57 TestAddons/StoppedEnableDisable 12.39
58 TestCertOptions 39.37
59 TestCertExpiration 332.1
61 TestForceSystemdFlag 40.47
62 TestForceSystemdEnv 44.27
67 TestErrorSpam/setup 29.34
68 TestErrorSpam/start 0.84
69 TestErrorSpam/status 1.16
70 TestErrorSpam/pause 6.08
71 TestErrorSpam/unpause 5.35
72 TestErrorSpam/stop 1.52
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 82.82
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 30.05
79 TestFunctional/serial/KubeContext 0.06
80 TestFunctional/serial/KubectlGetPods 0.1
83 TestFunctional/serial/CacheCmd/cache/add_remote 3.64
84 TestFunctional/serial/CacheCmd/cache/add_local 1.3
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.05
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.91
89 TestFunctional/serial/CacheCmd/cache/delete 0.12
90 TestFunctional/serial/MinikubeKubectlCmd 0.14
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
92 TestFunctional/serial/ExtraConfig 33.21
93 TestFunctional/serial/ComponentHealth 0.1
94 TestFunctional/serial/LogsCmd 1.44
95 TestFunctional/serial/LogsFileCmd 1.56
96 TestFunctional/serial/InvalidService 4.61
98 TestFunctional/parallel/ConfigCmd 0.5
99 TestFunctional/parallel/DashboardCmd 14.07
100 TestFunctional/parallel/DryRun 0.48
101 TestFunctional/parallel/InternationalLanguage 0.23
102 TestFunctional/parallel/StatusCmd 1.1
106 TestFunctional/parallel/ServiceCmdConnect 8.64
107 TestFunctional/parallel/AddonsCmd 0.14
108 TestFunctional/parallel/PersistentVolumeClaim 19.92
110 TestFunctional/parallel/SSHCmd 0.75
111 TestFunctional/parallel/CpCmd 2.4
113 TestFunctional/parallel/FileSync 0.4
114 TestFunctional/parallel/CertSync 2.16
118 TestFunctional/parallel/NodeLabels 0.12
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
122 TestFunctional/parallel/License 0.35
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.7
125 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
127 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.42
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
129 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
133 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
135 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
136 TestFunctional/parallel/ProfileCmd/profile_list 0.44
137 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
138 TestFunctional/parallel/MountCmd/any-port 8.59
139 TestFunctional/parallel/ServiceCmd/List 0.54
140 TestFunctional/parallel/ServiceCmd/JSONOutput 0.53
141 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
142 TestFunctional/parallel/ServiceCmd/Format 0.51
143 TestFunctional/parallel/ServiceCmd/URL 0.38
144 TestFunctional/parallel/MountCmd/specific-port 2.36
145 TestFunctional/parallel/MountCmd/VerifyCleanup 2.69
146 TestFunctional/parallel/Version/short 0.1
147 TestFunctional/parallel/Version/components 1.05
148 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
149 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
150 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
151 TestFunctional/parallel/ImageCommands/ImageListYaml 0.29
152 TestFunctional/parallel/ImageCommands/ImageBuild 3.98
153 TestFunctional/parallel/ImageCommands/Setup 0.64
154 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.02
155 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.05
156 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.16
157 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
158 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
159 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
160 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.55
161 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
162 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
163 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
174 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.11
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 3.64
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 1.13
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.05
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.05
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.32
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 1.96
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.12
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 0.98
190 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 0.98
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 0.5
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 0.42
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.23
202 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.16
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 0.72
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 2.24
208 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.3
209 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 1.72
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.58
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 0.33
220 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0.11
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.38
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.39
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.39
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port 2.08
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup 2.26
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.05
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 0.51
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.23
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.24
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.23
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.23
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 3.53
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.26
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 1.23
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 0.85
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 1.06
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.4
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.58
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 0.79
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.41
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.15
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.14
257 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.16
258 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.05
259 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.02
260 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.02
264 TestMultiControlPlane/serial/StartCluster 197.86
265 TestMultiControlPlane/serial/DeployApp 8.05
266 TestMultiControlPlane/serial/PingHostFromPods 1.56
267 TestMultiControlPlane/serial/AddWorkerNode 59.11
268 TestMultiControlPlane/serial/NodeLabels 0.11
269 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
270 TestMultiControlPlane/serial/CopyFile 20.57
271 TestMultiControlPlane/serial/StopSecondaryNode 12.95
272 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.88
273 TestMultiControlPlane/serial/RestartSecondaryNode 21.17
274 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.28
287 TestJSONOutput/start/Command 53.1
288 TestJSONOutput/start/Audit 0
290 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
291 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
294 TestJSONOutput/pause/Audit 0
296 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
297 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
300 TestJSONOutput/unpause/Audit 0
302 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
303 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
305 TestJSONOutput/stop/Command 5.88
306 TestJSONOutput/stop/Audit 0
308 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
309 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
310 TestErrorJSONOutput 0.25
312 TestKicCustomNetwork/create_custom_network 38.62
313 TestKicCustomNetwork/use_default_bridge_network 36.65
314 TestKicExistingNetwork 35.46
315 TestKicCustomSubnet 36.72
316 TestKicStaticIP 38.6
317 TestMainNoArgs 0.06
318 TestMinikubeProfile 71.73
321 TestMountStart/serial/StartWithMountFirst 9.17
322 TestMountStart/serial/VerifyMountFirst 0.27
323 TestMountStart/serial/StartWithMountSecond 9.68
324 TestMountStart/serial/VerifyMountSecond 0.27
325 TestMountStart/serial/DeleteFirst 1.72
326 TestMountStart/serial/VerifyMountPostDelete 0.3
327 TestMountStart/serial/Stop 1.29
328 TestMountStart/serial/RestartStopped 8.22
329 TestMountStart/serial/VerifyMountPostStop 0.28
332 TestMultiNode/serial/FreshStart2Nodes 136.8
333 TestMultiNode/serial/DeployApp2Nodes 5.49
334 TestMultiNode/serial/PingHostFrom2Pods 0.94
335 TestMultiNode/serial/AddNode 58.77
336 TestMultiNode/serial/MultiNodeLabels 0.09
337 TestMultiNode/serial/ProfileList 0.73
338 TestMultiNode/serial/CopyFile 10.79
339 TestMultiNode/serial/StopNode 2.44
340 TestMultiNode/serial/StartAfterStop 8.18
341 TestMultiNode/serial/RestartKeepsNodes 77.1
342 TestMultiNode/serial/DeleteNode 5.73
343 TestMultiNode/serial/StopMultiNode 24.14
344 TestMultiNode/serial/RestartMultiNode 55.45
345 TestMultiNode/serial/ValidateNameConflict 35.17
350 TestPreload 100.11
352 TestScheduledStopUnix 108.88
355 TestInsufficientStorage 13.25
356 TestRunningBinaryUpgrade 304.6
359 TestMissingContainerUpgrade 116.85
361 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
362 TestNoKubernetes/serial/StartWithK8s 40.46
363 TestNoKubernetes/serial/StartWithStopK8s 8.15
364 TestNoKubernetes/serial/Start 9.9
365 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
366 TestNoKubernetes/serial/VerifyK8sNotRunning 0.46
367 TestNoKubernetes/serial/ProfileList 3.53
368 TestNoKubernetes/serial/Stop 1.39
369 TestNoKubernetes/serial/StartNoArgs 7.9
370 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
371 TestStoppedBinaryUpgrade/Setup 1.4
372 TestStoppedBinaryUpgrade/Upgrade 304.71
373 TestStoppedBinaryUpgrade/MinikubeLogs 2.23
382 TestPause/serial/Start 81.8
383 TestPause/serial/SecondStartNoReconfiguration 27.6
x
+
TestDownloadOnly/v1.28.0/json-events (10.52s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-220862 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-220862 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.516056055s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (10.52s)
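
The json-events subtests run minikube start with -o=json, which reports progress as JSON events on stdout. A hypothetical sketch of consuming such a newline-delimited JSON stream from a command (the event struct below is a placeholder, not minikube's actual schema):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

// event is an illustrative shape for one JSON line of output.
type event struct {
	Type string          `json:"type"`
	Data json.RawMessage `json:"data"`
}

func main() {
	cmd := exec.Command("minikube", "start", "-o=json", "--download-only")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println(err)
		return
	}
	if err := cmd.Start(); err != nil {
		fmt.Println(err)
		return
	}
	sc := bufio.NewScanner(stdout)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Println("event:", ev.Type)
	}
	_ = cmd.Wait()
}
```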

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1212 20:09:42.994806  364853 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1212 20:09:42.994884  364853 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-220862
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-220862: exit status 85 (429.05484ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-220862 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-220862 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:09:32
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:09:32.528326  364859 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:09:32.528485  364859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:32.528499  364859 out.go:374] Setting ErrFile to fd 2...
	I1212 20:09:32.528504  364859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:32.528788  364859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	W1212 20:09:32.529013  364859 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22112-362983/.minikube/config/config.json: open /home/jenkins/minikube-integration/22112-362983/.minikube/config/config.json: no such file or directory
	I1212 20:09:32.529440  364859 out.go:368] Setting JSON to true
	I1212 20:09:32.530273  364859 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10325,"bootTime":1765559848,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:09:32.530353  364859 start.go:143] virtualization:  
	I1212 20:09:32.535643  364859 out.go:99] [download-only-220862] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1212 20:09:32.535857  364859 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball: no such file or directory
	I1212 20:09:32.535984  364859 notify.go:221] Checking for updates...
	I1212 20:09:32.540016  364859 out.go:171] MINIKUBE_LOCATION=22112
	I1212 20:09:32.543389  364859 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:09:32.546525  364859 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:09:32.549564  364859 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:09:32.552567  364859 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 20:09:32.558639  364859 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 20:09:32.559009  364859 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:09:32.584386  364859 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:09:32.584507  364859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:09:32.643609  364859 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-12 20:09:32.634185903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:09:32.643722  364859 docker.go:319] overlay module found
	I1212 20:09:32.646855  364859 out.go:99] Using the docker driver based on user configuration
	I1212 20:09:32.646888  364859 start.go:309] selected driver: docker
	I1212 20:09:32.646895  364859 start.go:927] validating driver "docker" against <nil>
	I1212 20:09:32.646997  364859 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:09:32.701477  364859 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-12-12 20:09:32.692531416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:09:32.701650  364859 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:09:32.701932  364859 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1212 20:09:32.702083  364859 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 20:09:32.705350  364859 out.go:171] Using Docker driver with root privileges
	I1212 20:09:32.708594  364859 cni.go:84] Creating CNI manager for ""
	I1212 20:09:32.708661  364859 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:09:32.708673  364859 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:09:32.708749  364859 start.go:353] cluster config:
	{Name:download-only-220862 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-220862 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:09:32.711866  364859 out.go:99] Starting "download-only-220862" primary control-plane node in "download-only-220862" cluster
	I1212 20:09:32.711899  364859 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:09:32.714858  364859 out.go:99] Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:09:32.714900  364859 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 20:09:32.715081  364859 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:09:32.731313  364859 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 20:09:32.731497  364859 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 20:09:32.731596  364859 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 20:09:32.769916  364859 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:09:32.769963  364859 cache.go:65] Caching tarball of preloaded images
	I1212 20:09:32.770162  364859 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 20:09:32.773524  364859 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1212 20:09:32.773557  364859 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1212 20:09:32.864905  364859 preload.go:295] Got checksum from GCS API "e092595ade89dbfc477bd4cd6b9c633b"
	I1212 20:09:32.865040  364859 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e092595ade89dbfc477bd4cd6b9c633b -> /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:09:38.552304  364859 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1212 20:09:38.552789  364859 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/download-only-220862/config.json ...
	I1212 20:09:38.552829  364859 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/download-only-220862/config.json: {Name:mka3329524fb1b982aac5870daababa0e542babd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1212 20:09:38.552995  364859 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1212 20:09:38.553170  364859 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/22112-362983/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-220862 host does not exist
	  To start a cluster, run: "minikube start -p download-only-220862"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.43s)
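
The "Last Start" log above shows the preload tarball being downloaded with an md5 checksum obtained from the GCS API and appended as ?checksum=md5:..., so the fetched file can be verified. A generic sketch of that verify-after-download step (the path and helper name are illustrative, not minikube's downloader):

```go
package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// verifyMD5 re-hashes a downloaded file and compares it to the expected
// checksum, the same idea a checksum-aware downloader applies to a preload
// tarball after fetching it.
func verifyMD5(path, expected string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != expected {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, expected)
	}
	return nil
}

func main() {
	// Hypothetical usage; the path is a placeholder.
	if err := verifyMD5("/tmp/preloaded-images.tar.lz4", "e092595ade89dbfc477bd4cd6b9c633b"); err != nil {
		fmt.Println(err)
	}
}
```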

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.38s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.38s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-220862
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/json-events (5.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-206451 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-206451 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.837439286s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (5.84s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1212 20:09:49.881994  364853 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
I1212 20:09:49.882031  364853 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-206451
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-206451: exit status 85 (94.251774ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-220862 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-220862 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ delete  │ -p download-only-220862                                                                                                                                                   │ download-only-220862 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -o=json --download-only -p download-only-206451 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-206451 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:09:44
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:09:44.093491  365060 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:09:44.093607  365060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:44.093618  365060 out.go:374] Setting ErrFile to fd 2...
	I1212 20:09:44.093624  365060 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:44.093900  365060 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:09:44.094318  365060 out.go:368] Setting JSON to true
	I1212 20:09:44.095123  365060 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10336,"bootTime":1765559848,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:09:44.095197  365060 start.go:143] virtualization:  
	I1212 20:09:44.118945  365060 out.go:99] [download-only-206451] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:09:44.119268  365060 notify.go:221] Checking for updates...
	I1212 20:09:44.166533  365060 out.go:171] MINIKUBE_LOCATION=22112
	I1212 20:09:44.200049  365060 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:09:44.230721  365060 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:09:44.263573  365060 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:09:44.295523  365060 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 20:09:44.359848  365060 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 20:09:44.360191  365060 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:09:44.381899  365060 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:09:44.382017  365060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:09:44.445720  365060 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 20:09:44.436094154 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:09:44.445828  365060 docker.go:319] overlay module found
	I1212 20:09:44.449782  365060 out.go:99] Using the docker driver based on user configuration
	I1212 20:09:44.449821  365060 start.go:309] selected driver: docker
	I1212 20:09:44.449827  365060 start.go:927] validating driver "docker" against <nil>
	I1212 20:09:44.449926  365060 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:09:44.512881  365060 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 20:09:44.503292796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:09:44.513046  365060 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:09:44.513328  365060 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1212 20:09:44.513485  365060 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 20:09:44.517791  365060 out.go:171] Using Docker driver with root privileges
	I1212 20:09:44.521530  365060 cni.go:84] Creating CNI manager for ""
	I1212 20:09:44.521613  365060 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:09:44.521628  365060 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:09:44.521720  365060 start.go:353] cluster config:
	{Name:download-only-206451 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:download-only-206451 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:09:44.525535  365060 out.go:99] Starting "download-only-206451" primary control-plane node in "download-only-206451" cluster
	I1212 20:09:44.525571  365060 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:09:44.529163  365060 out.go:99] Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:09:44.529221  365060 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:09:44.529329  365060 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:09:44.544401  365060 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 20:09:44.544551  365060 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 20:09:44.544585  365060 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory, skipping pull
	I1212 20:09:44.544595  365060 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in cache, skipping pull
	I1212 20:09:44.544613  365060 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 as a tarball
	I1212 20:09:44.580410  365060 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	I1212 20:09:44.580441  365060 cache.go:65] Caching tarball of preloaded images
	I1212 20:09:44.580627  365060 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime crio
	I1212 20:09:44.584272  365060 out.go:99] Downloading Kubernetes v1.34.2 preload ...
	I1212 20:09:44.584301  365060 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1212 20:09:44.674684  365060 preload.go:295] Got checksum from GCS API "36a1245638f6169d426638fac0bd307d"
	I1212 20:09:44.674742  365060 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.2/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:36a1245638f6169d426638fac0bd307d -> /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-206451 host does not exist
	  To start a cluster, run: "minikube start -p download-only-206451"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.09s)

TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.23s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-206451
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.35.0-beta.0/json-events (6.46s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-527569 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-527569 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.463693821s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (6.46s)

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1212 20:09:56.807554  364853 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
I1212 20:09:56.807590  364853 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-527569
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-527569: exit status 85 (82.085077ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                       ARGS                                                                                       │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-220862 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-220862 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ delete  │ -p download-only-220862                                                                                                                                                          │ download-only-220862 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -o=json --download-only -p download-only-206451 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=crio --driver=docker  --container-runtime=crio        │ download-only-206451 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                            │ minikube             │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ delete  │ -p download-only-206451                                                                                                                                                          │ download-only-206451 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │ 12 Dec 25 20:09 UTC │
	│ start   │ -o=json --download-only -p download-only-527569 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-527569 │ jenkins │ v1.37.0 │ 12 Dec 25 20:09 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/12 20:09:50
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1212 20:09:50.385143  365255 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:09:50.385348  365255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:50.385375  365255 out.go:374] Setting ErrFile to fd 2...
	I1212 20:09:50.385394  365255 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:09:50.385944  365255 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:09:50.386420  365255 out.go:368] Setting JSON to true
	I1212 20:09:50.387337  365255 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10343,"bootTime":1765559848,"procs":156,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:09:50.387434  365255 start.go:143] virtualization:  
	I1212 20:09:50.390673  365255 out.go:99] [download-only-527569] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:09:50.390897  365255 notify.go:221] Checking for updates...
	I1212 20:09:50.393662  365255 out.go:171] MINIKUBE_LOCATION=22112
	I1212 20:09:50.396593  365255 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:09:50.399508  365255 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:09:50.402462  365255 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:09:50.405366  365255 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1212 20:09:50.411003  365255 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1212 20:09:50.411268  365255 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:09:50.431103  365255 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:09:50.431249  365255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:09:50.498383  365255 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 20:09:50.489133946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:09:50.498493  365255 docker.go:319] overlay module found
	I1212 20:09:50.501526  365255 out.go:99] Using the docker driver based on user configuration
	I1212 20:09:50.501575  365255 start.go:309] selected driver: docker
	I1212 20:09:50.501587  365255 start.go:927] validating driver "docker" against <nil>
	I1212 20:09:50.501696  365255 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:09:50.554323  365255 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:47 SystemTime:2025-12-12 20:09:50.545481919 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:09:50.554490  365255 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1212 20:09:50.554790  365255 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1212 20:09:50.554968  365255 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1212 20:09:50.558013  365255 out.go:171] Using Docker driver with root privileges
	I1212 20:09:50.560989  365255 cni.go:84] Creating CNI manager for ""
	I1212 20:09:50.561052  365255 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1212 20:09:50.561065  365255 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1212 20:09:50.561142  365255 start.go:353] cluster config:
	{Name:download-only-527569 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-527569 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.l
ocal ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:09:50.564082  365255 out.go:99] Starting "download-only-527569" primary control-plane node in "download-only-527569" cluster
	I1212 20:09:50.564100  365255 cache.go:134] Beginning downloading kic base image for docker with crio
	I1212 20:09:50.566868  365255 out.go:99] Pulling base image v0.0.48-1765505794-22112 ...
	I1212 20:09:50.566904  365255 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:09:50.567098  365255 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local docker daemon
	I1212 20:09:50.582889  365255 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 to local cache
	I1212 20:09:50.583032  365255 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory
	I1212 20:09:50.583056  365255 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 in local cache directory, skipping pull
	I1212 20:09:50.583061  365255 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 exists in cache, skipping pull
	I1212 20:09:50.583073  365255 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 as a tarball
	I1212 20:09:50.628622  365255 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	I1212 20:09:50.628650  365255 cache.go:65] Caching tarball of preloaded images
	I1212 20:09:50.628817  365255 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime crio
	I1212 20:09:50.631877  365255 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1212 20:09:50.631909  365255 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4 from gcs api...
	I1212 20:09:50.723219  365255 preload.go:295] Got checksum from GCS API "e7da2fb676059c00535073e4a61150f1"
	I1212 20:09:50.723272  365255 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:e7da2fb676059c00535073e4a61150f1 -> /home/jenkins/minikube-integration/22112-362983/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-beta.0-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-527569 host does not exist
	  To start a cluster, run: "minikube start -p download-only-527569"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-527569
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
I1212 20:09:58.126041  364853 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-598936 --alsologtostderr --binary-mirror http://127.0.0.1:40449 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-598936" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-598936
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-603031
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-603031: exit status 85 (74.677896ms)
-- stdout --
	* Profile "addons-603031" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-603031"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-603031
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-603031: exit status 85 (76.819245ms)
-- stdout --
	* Profile "addons-603031" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-603031"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (164.32s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-603031 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-603031 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m44.323677536s)
--- PASS: TestAddons/Setup (164.32s)

TestAddons/serial/GCPAuth/Namespaces (0.19s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-603031 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-603031 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.19s)

TestAddons/serial/GCPAuth/FakeCredentials (9.84s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-603031 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-603031 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1ce0a663-cd7a-4247-ba0d-fcaaf2f5818e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1ce0a663-cd7a-4247-ba0d-fcaaf2f5818e] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004061755s
addons_test.go:696: (dbg) Run:  kubectl --context addons-603031 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-603031 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-603031 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-603031 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.84s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-603031
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-603031: (12.101175786s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-603031
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-603031
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-603031
--- PASS: TestAddons/StoppedEnableDisable (12.39s)

TestCertOptions (39.37s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-927255 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-927255 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (36.484157528s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-927255 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-927255 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-927255 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-927255" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-927255
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-927255: (2.148006464s)
--- PASS: TestCertOptions (39.37s)

TestCertExpiration (332.1s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-142118 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-142118 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (39.349739929s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-142118 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-142118 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (1m50.199845846s)
helpers_test.go:176: Cleaning up "cert-expiration-142118" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-142118
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-142118: (2.545264221s)
--- PASS: TestCertExpiration (332.10s)

TestForceSystemdFlag (40.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-700267 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-700267 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.134848627s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-700267 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-700267" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-700267
E1212 21:40:47.145412  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-700267: (2.836478514s)
--- PASS: TestForceSystemdFlag (40.47s)

TestForceSystemdEnv (44.27s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-459104 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-459104 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (41.507061125s)
helpers_test.go:176: Cleaning up "force-systemd-env-459104" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-459104
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-459104: (2.760132454s)
--- PASS: TestForceSystemdEnv (44.27s)

TestErrorSpam/setup (29.34s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-286667 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-286667 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-286667 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-286667 --driver=docker  --container-runtime=crio: (29.339506988s)
--- PASS: TestErrorSpam/setup (29.34s)

TestErrorSpam/start (0.84s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 start --dry-run
--- PASS: TestErrorSpam/start (0.84s)

TestErrorSpam/status (1.16s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (6.08s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 pause: exit status 80 (1.947366975s)
-- stdout --
	* Pausing node nospam-286667 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:16:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 pause: exit status 80 (1.621252991s)
-- stdout --
	* Pausing node nospam-286667 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:16:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 pause: exit status 80 (2.511742165s)
-- stdout --
	* Pausing node nospam-286667 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:16:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.08s)

TestErrorSpam/unpause (5.35s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 unpause: exit status 80 (1.74759699s)
-- stdout --
	* Unpausing node nospam-286667 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:16:43Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 unpause: exit status 80 (1.510241001s)
-- stdout --
	* Unpausing node nospam-286667 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:16:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:151: "out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 unpause: exit status 80 (2.095287231s)
-- stdout --
	* Unpausing node nospam-286667 ... 
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-12T20:16:47Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
error_spam_test.go:174: "out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.35s)

TestErrorSpam/stop (1.52s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 stop: (1.313138762s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-286667 --log_dir /tmp/nospam-286667 stop
--- PASS: TestErrorSpam/stop (1.52s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (82.82s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205528 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E1212 20:17:44.064148  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:44.070541  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:44.081966  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:44.103476  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:44.144960  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:44.226495  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:44.388090  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:44.709609  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:45.351845  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:46.633650  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:49.194930  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:17:54.317378  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:18:04.558766  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-205528 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m22.819001011s)
--- PASS: TestFunctional/serial/StartWithProxy (82.82s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (30.05s)

=== RUN   TestFunctional/serial/SoftStart
I1212 20:18:16.095046  364853 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205528 --alsologtostderr -v=8
E1212 20:18:25.040078  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-205528 --alsologtostderr -v=8: (30.040947433s)
functional_test.go:678: soft start took 30.047330286s for "functional-205528" cluster.
I1212 20:18:46.136818  364853 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (30.05s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-205528 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-205528 cache add registry.k8s.io/pause:3.1: (1.24438005s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-205528 cache add registry.k8s.io/pause:3.3: (1.212710621s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-205528 cache add registry.k8s.io/pause:latest: (1.185990322s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.64s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-205528 /tmp/TestFunctionalserialCacheCmdcacheadd_local3182748811/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 cache add minikube-local-cache-test:functional-205528
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 cache delete minikube-local-cache-test:functional-205528
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-205528
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205528 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (312.306979ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.91s)
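For reference, the reload flow above can be replayed by hand. This is only a recap of the commands from this run; it assumes the functional-205528 profile is still up and the test binary at out/minikube-linux-arm64:
    out/minikube-linux-arm64 -p functional-205528 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-arm64 -p functional-205528 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer on the node
    out/minikube-linux-arm64 -p functional-205528 cache reload                                            # pushes cached images back onto the node
    out/minikube-linux-arm64 -p functional-205528 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again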

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 kubectl -- --context functional-205528 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-205528 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.21s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205528 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1212 20:19:06.009787  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-205528 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.20830951s)
functional_test.go:776: restart took 33.208396765s for "functional-205528" cluster.
I1212 20:19:27.177646  364853 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (33.21s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-205528 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-205528 logs: (1.441930284s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 logs --file /tmp/TestFunctionalserialLogsFileCmd2336197589/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-205528 logs --file /tmp/TestFunctionalserialLogsFileCmd2336197589/001/logs.txt: (1.556844684s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.56s)

                                                
                                    
TestFunctional/serial/InvalidService (4.61s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-205528 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-205528
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-205528: exit status 115 (385.329165ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31213 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-205528 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.61s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205528 config get cpus: exit status 14 (96.362363ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205528 config get cpus: exit status 14 (66.078543ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
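A short recap of the config round-trip above, assuming the same profile; as the stderr blocks show, `config get` on an unset key exits with status 14:
    out/minikube-linux-arm64 -p functional-205528 config unset cpus
    out/minikube-linux-arm64 -p functional-205528 config get cpus     # exit status 14: key not in config
    out/minikube-linux-arm64 -p functional-205528 config set cpus 2
    out/minikube-linux-arm64 -p functional-205528 config get cpus     # prints 2
    out/minikube-linux-arm64 -p functional-205528 config unset cpus   # back to the unset state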

                                                
                                    
TestFunctional/parallel/DashboardCmd (14.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-205528 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-205528 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 389948: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (14.07s)

                                                
                                    
TestFunctional/parallel/DryRun (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205528 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-205528 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (210.576189ms)

                                                
                                                
-- stdout --
	* [functional-205528] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:20:05.218127  389373 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:20:05.218345  389373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:20:05.218375  389373 out.go:374] Setting ErrFile to fd 2...
	I1212 20:20:05.218397  389373 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:20:05.219608  389373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:20:05.220092  389373 out.go:368] Setting JSON to false
	I1212 20:20:05.221085  389373 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10958,"bootTime":1765559848,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:20:05.221189  389373 start.go:143] virtualization:  
	I1212 20:20:05.224560  389373 out.go:179] * [functional-205528] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:20:05.228569  389373 notify.go:221] Checking for updates...
	I1212 20:20:05.231641  389373 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:20:05.234551  389373 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:20:05.237340  389373 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:20:05.240052  389373 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:20:05.242832  389373 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:20:05.245693  389373 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:20:05.249074  389373 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:20:05.249659  389373 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:20:05.283493  389373 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:20:05.283618  389373 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:20:05.352755  389373 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 20:20:05.341974145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:20:05.352869  389373 docker.go:319] overlay module found
	I1212 20:20:05.356033  389373 out.go:179] * Using the docker driver based on existing profile
	I1212 20:20:05.358958  389373 start.go:309] selected driver: docker
	I1212 20:20:05.358994  389373 start.go:927] validating driver "docker" against &{Name:functional-205528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-205528 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:20:05.359096  389373 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:20:05.362815  389373 out.go:203] 
	W1212 20:20:05.365993  389373 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 20:20:05.368957  389373 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205528 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.48s)
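The non-zero exit above is the expected memory validation, not a failure. Replaying it outside the harness (same profile and binary assumed) should exit with status 23 and the RSRC_INSUFFICIENT_REQ_MEMORY reason:
    out/minikube-linux-arm64 start -p functional-205528 --dry-run --memory 250MB --alsologtostderr --driver=docker --container-runtime=crio
    echo $?   # 23: requested 250MiB is below the 1800MB usable minimum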

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-205528 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-205528 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (233.055711ms)

                                                
                                                
-- stdout --
	* [functional-205528] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:20:05.013389  389311 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:20:05.013584  389311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:20:05.013593  389311 out.go:374] Setting ErrFile to fd 2...
	I1212 20:20:05.013599  389311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:20:05.016224  389311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:20:05.016819  389311 out.go:368] Setting JSON to false
	I1212 20:20:05.018900  389311 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":10957,"bootTime":1765559848,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:20:05.018996  389311 start.go:143] virtualization:  
	I1212 20:20:05.022770  389311 out.go:179] * [functional-205528] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1212 20:20:05.026035  389311 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:20:05.026126  389311 notify.go:221] Checking for updates...
	I1212 20:20:05.031967  389311 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:20:05.034872  389311 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:20:05.037943  389311 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:20:05.041188  389311 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:20:05.044104  389311 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:20:05.047952  389311 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:20:05.048781  389311 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:20:05.077648  389311 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:20:05.077786  389311 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:20:05.142823  389311 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-12-12 20:20:05.129686889 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:20:05.142938  389311 docker.go:319] overlay module found
	I1212 20:20:05.146033  389311 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 20:20:05.148920  389311 start.go:309] selected driver: docker
	I1212 20:20:05.148960  389311 start.go:927] validating driver "docker" against &{Name:functional-205528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-205528 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:20:05.149091  389311 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:20:05.152731  389311 out.go:203] 
	W1212 20:20:05.155756  389311 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 20:20:05.158583  389311 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-205528 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-205528 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-sp5xc" [fe8801cd-b7d1-42f9-a888-149b5f5c8945] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-sp5xc" [fe8801cd-b7d1-42f9-a888-149b5f5c8945] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.006684835s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30254
functional_test.go:1680: http://192.168.49.2:30254: success! body:
Request served by hello-node-connect-7d85dfc575-sp5xc

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30254
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.64s)
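The connect test is a standard deploy/expose/probe loop. A minimal recap of the commands from this run follows; the NodePort URL differs between runs, and curl here stands in for the HTTP check the test performs in Go:
    kubectl --context functional-205528 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-205528 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-arm64 -p functional-205528 service hello-node-connect --url   # e.g. http://192.168.49.2:30254
    curl http://192.168.49.2:30254/                                                  # echo-server reports which pod served the request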

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (19.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [cfe4b06c-3064-4f0d-b71d-344f63eaceab] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00367113s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-205528 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-205528 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-205528 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-205528 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [6ed2f059-79cf-4b60-af63-f513ee1be254] Pending
helpers_test.go:353: "sp-pod" [6ed2f059-79cf-4b60-af63-f513ee1be254] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [6ed2f059-79cf-4b60-af63-f513ee1be254] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003613703s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-205528 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-205528 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-205528 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [69531568-f61d-4f4b-8ba4-3e9623573317] Pending
helpers_test.go:353: "sp-pod" [69531568-f61d-4f4b-8ba4-3e9623573317] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003891106s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-205528 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (19.92s)
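The PVC check above verifies that data written into the claim survives a pod delete/recreate. A recap of the kubectl sequence from this run (the manifests are the repo's testdata/storage-provisioner files):
    kubectl --context functional-205528 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-205528 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-205528 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-205528 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-205528 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-205528 exec sp-pod -- ls /tmp/mount   # foo is still present, so the volume persisted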

                                                
                                    
TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh -n functional-205528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 cp functional-205528:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2639682965/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh -n functional-205528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh -n functional-205528 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.40s)

                                                
                                    
TestFunctional/parallel/FileSync (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/364853/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo cat /etc/test/nested/copy/364853/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

                                                
                                    
TestFunctional/parallel/CertSync (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/364853.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo cat /etc/ssl/certs/364853.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/364853.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo cat /usr/share/ca-certificates/364853.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3648532.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo cat /etc/ssl/certs/3648532.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3648532.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo cat /usr/share/ca-certificates/3648532.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-205528 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205528 ssh "sudo systemctl is-active docker": exit status 1 (358.844132ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205528 ssh "sudo systemctl is-active containerd": exit status 1 (386.657059ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
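Since this profile runs crio, the other runtimes should be installed but inactive; the check above amounts to the two commands below, where ssh propagates systemctl's exit status 3 for an inactive unit:
    out/minikube-linux-arm64 -p functional-205528 ssh "sudo systemctl is-active docker"       # prints "inactive", exits 3
    out/minikube-linux-arm64 -p functional-205528 ssh "sudo systemctl is-active containerd"   # prints "inactive", exits 3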

                                                
                                    
TestFunctional/parallel/License (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.35s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-205528 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-205528 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-205528 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-205528 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 387153: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.70s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-205528 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-205528 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [0048be01-2451-436e-a9fa-e96970687cd2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [0048be01-2451-436e-a9fa-e96970687cd2] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004019354s
I1212 20:19:45.250327  364853 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.42s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-205528 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.19.220 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-205528 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-205528 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-205528 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-qvq45" [3a9c7511-5a39-478b-a0a2-789d70997185] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-qvq45" [3a9c7511-5a39-478b-a0a2-789d70997185] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004423236s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "380.567367ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "55.410187ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "366.78854ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "58.413492ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)
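profile_json_output only checks that `profile list -o json` is well-formed and fast; a minimal sketch of consuming that output from Go, deliberately decoding generically rather than assuming a schema this report does not show:

// Run the same binary as the log and list the top-level keys of its JSON output.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatalf("profile list failed: %v", err)
	}
	var doc map[string]json.RawMessage // no assumptions about the exact schema
	if err := json.Unmarshal(out, &doc); err != nil {
		log.Fatalf("unexpected output: %v", err)
	}
	for key := range doc {
		fmt.Println("top-level key:", key)
	}
}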

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.59s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdany-port3678871737/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765570798518664774" to /tmp/TestFunctionalparallelMountCmdany-port3678871737/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765570798518664774" to /tmp/TestFunctionalparallelMountCmdany-port3678871737/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765570798518664774" to /tmp/TestFunctionalparallelMountCmdany-port3678871737/001/test-1765570798518664774
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (353.392478ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 20:19:58.873104  364853 retry.go:31] will retry after 452.165684ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 12 20:19 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 12 20:19 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 12 20:19 test-1765570798518664774
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh cat /mount-9p/test-1765570798518664774
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-205528 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [0e30c71c-94d3-46e1-a4df-0831dd1a4317] Pending
helpers_test.go:353: "busybox-mount" [0e30c71c-94d3-46e1-a4df-0831dd1a4317] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [0e30c71c-94d3-46e1-a4df-0831dd1a4317] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [0e30c71c-94d3-46e1-a4df-0831dd1a4317] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.006507076s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-205528 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdany-port3678871737/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.59s)
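The retry lines above show the pattern the mount tests rely on: poll findmnt over `minikube ssh` until the 9p mount appears; a minimal sketch with a hard deadline (the 30s value is illustrative, not the test's):

// Poll for the 9p mount the same way the log retries findmnt.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for {
		err := exec.Command("out/minikube-linux-arm64", "-p", "functional-205528",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Println("/mount-9p is a 9p mount")
			return
		}
		if time.Now().After(deadline) {
			log.Fatalf("mount never appeared: %v", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
}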

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.54s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 service list -o json
functional_test.go:1504: Took "531.711058ms" to run "out/minikube-linux-arm64 -p functional-205528 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30122
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30122
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
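ServiceCmd/HTTPS and /URL both resolve the NodePort endpoint and probe it; a minimal sketch that fetches the URL and issues one GET, assuming the command prints a single URL as it does in the log above:

// Ask minikube for the hello-node NodePort URL, then hit it once.
package main

import (
	"fmt"
	"log"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-205528",
		"service", "hello-node", "--url").Output()
	if err != nil {
		log.Fatalf("service --url failed: %v", err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		log.Fatalf("GET %s failed: %v", url, err)
	}
	defer resp.Body.Close()
	fmt.Println(url, "->", resp.Status)
}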

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.36s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdspecific-port153629359/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (655.213433ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 20:20:07.765781  364853 retry.go:31] will retry after 394.25523ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdspecific-port153629359/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205528 ssh "sudo umount -f /mount-9p": exit status 1 (384.330676ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-205528 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdspecific-port153629359/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.36s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.69s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup775782564/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup775782564/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup775782564/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T" /mount1: exit status 1 (849.91903ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1212 20:20:10.325022  364853 retry.go:31] will retry after 595.206427ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-205528 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup775782564/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup775782564/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-205528 /tmp/TestFunctionalparallelMountCmdVerifyCleanup775782564/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.69s)
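VerifyCleanup tears down every mount process for the profile with `mount --kill=true`, as shown above; a minimal sketch of that lifecycle, where /tmp/data is a hypothetical host directory that must already exist:

// Start one background host-folder mount, then kill all mounts for the profile.
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	mk := "out/minikube-linux-arm64"
	mount := exec.Command(mk, "mount", "-p", "functional-205528", "/tmp/data:/mount1")
	if err := mount.Start(); err != nil { // runs until killed
		log.Fatalf("mount failed to start: %v", err)
	}
	time.Sleep(2 * time.Second) // give the 9p server a moment to come up

	if err := exec.Command(mk, "mount", "-p", "functional-205528", "--kill=true").Run(); err != nil {
		log.Fatalf("mount --kill failed: %v", err)
	}
	_ = mount.Wait() // the background process exits once killed
	log.Println("all mounts for the profile cleaned up")
}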

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (1.05s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-205528 version -o=json --components: (1.047081351s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-205528 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
localhost/minikube-local-cache-test:functional-205528
localhost/kicbase/echo-server:functional-205528
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-205528 image ls --format short --alsologtostderr:
I1212 20:20:21.906518  392109 out.go:360] Setting OutFile to fd 1 ...
I1212 20:20:21.906718  392109 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:21.906748  392109 out.go:374] Setting ErrFile to fd 2...
I1212 20:20:21.906768  392109 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:21.907040  392109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:20:21.907703  392109 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 20:20:21.907872  392109 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 20:20:21.908461  392109 cli_runner.go:164] Run: docker container inspect functional-205528 --format={{.State.Status}}
I1212 20:20:21.930029  392109 ssh_runner.go:195] Run: systemctl --version
I1212 20:20:21.930084  392109 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-205528
I1212 20:20:21.949985  392109 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-205528/id_rsa Username:docker}
I1212 20:20:22.071894  392109 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)
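The stderr above shows the image listing is backed by `sudo crictl images --output json` inside the node; a minimal sketch of reading that JSON directly, where the struct fields are an assumption about crictl's output rather than something this report guarantees:

// List image tags reported by the node's CRI runtime via minikube ssh.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-205528",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		log.Fatalf("crictl images failed: %v", err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		log.Fatalf("decode: %v", err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag)
		}
	}
}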

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-205528 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ localhost/minikube-local-cache-test     │ functional-205528  │ aebfe7a7264f4 │ 3.33kB │
│ registry.k8s.io/kube-proxy              │ v1.34.2            │ 94bff1bec29fd │ 75.9MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 1611cd07b61d5 │ 3.77MB │
│ public.ecr.aws/nginx/nginx              │ alpine             │ 10afed3caf3ee │ 55.1MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 138784d87c9c5 │ 73.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.2            │ b178af3d91f80 │ 84.8MB │
│ docker.io/kicbase/echo-server           │ latest             │ ce2d2cda2d858 │ 4.79MB │
│ localhost/kicbase/echo-server           │ functional-205528  │ ce2d2cda2d858 │ 4.79MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.2            │ 1b34917560f09 │ 72.6MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.2            │ 4f982e73e768a │ 51.6MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-205528 image ls --format table --alsologtostderr:
I1212 20:20:22.656542  392298 out.go:360] Setting OutFile to fd 1 ...
I1212 20:20:22.656706  392298 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:22.656716  392298 out.go:374] Setting ErrFile to fd 2...
I1212 20:20:22.656722  392298 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:22.656974  392298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:20:22.657593  392298 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 20:20:22.657714  392298 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 20:20:22.658270  392298 cli_runner.go:164] Run: docker container inspect functional-205528 --format={{.State.Status}}
I1212 20:20:22.682655  392298 ssh_runner.go:195] Run: systemctl --version
I1212 20:20:22.682722  392298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-205528
I1212 20:20:22.700810  392298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-205528/id_rsa Username:docker}
I1212 20:20:22.813845  392298 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-205528 image ls --format json --alsologtostderr:
[{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b","localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-205528"],"size":"4789170"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b4610899694
49f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89","registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"72629077"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":
["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311
a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84","registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"84753391"},{"id":"94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786","repoDigests":["registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12","registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"75941783"},{"id":"aebfe7a7264f484932a278deff55c8ca8706c58eb061236ff5
c2e5a45d954161","repoDigests":["localhost/minikube-local-cache-test@sha256:4bb68926f62d263c137a372231c15bd3b68a0b8efb13c0331f2b2f836475a7b2"],"repoTags":["localhost/minikube-local-cache-test:functional-205528"],"size":"3330"},{"id":"10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4","repoDigests":["public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d","public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55077248"},{"id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949","repoDigests":["registry
.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe","registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"51592021"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6c
cd14aacfa65432d4cfd5789","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"73195387"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-205528 image ls --format json --alsologtostderr:
I1212 20:20:22.360874  392226 out.go:360] Setting OutFile to fd 1 ...
I1212 20:20:22.361093  392226 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:22.361121  392226 out.go:374] Setting ErrFile to fd 2...
I1212 20:20:22.361140  392226 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:22.361417  392226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:20:22.362088  392226 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 20:20:22.362259  392226 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 20:20:22.362884  392226 cli_runner.go:164] Run: docker container inspect functional-205528 --format={{.State.Status}}
I1212 20:20:22.390982  392226 ssh_runner.go:195] Run: systemctl --version
I1212 20:20:22.391044  392226 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-205528
I1212 20:20:22.411244  392226 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-205528/id_rsa Username:docker}
I1212 20:20:22.523930  392226 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-205528 image ls --format yaml --alsologtostderr:
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: 1b34917560f0916ad0d1e98debeaf98c640b68c5a38f6d87711f0e288e5d7be2
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4b3abd4d4543ac8451f97e9771aa0a29a9958e51ac02fe44900b4a224031df89
- registry.k8s.io/kube-controller-manager@sha256:5c3998664b77441c09a4604f1361b230e63f7a6f299fc02fc1ebd1a12c38e3eb
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "72629077"
- id: 94bff1bec29fd04573941f362e44a6730b151d46df215613feb3f1167703f786
repoDigests:
- registry.k8s.io/kube-proxy@sha256:20a31b16a001e3e4db71a17ba8effc4b145a3afa2086e844ab40dc5baa5b8d12
- registry.k8s.io/kube-proxy@sha256:d8b843ac8a5e861238df24a4db8c2ddced89948633400c4660464472045276f5
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "75941783"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"
- id: aebfe7a7264f484932a278deff55c8ca8706c58eb061236ff5c2e5a45d954161
repoDigests:
- localhost/minikube-local-cache-test@sha256:4bb68926f62d263c137a372231c15bd3b68a0b8efb13c0331f2b2f836475a7b2
repoTags:
- localhost/minikube-local-cache-test:functional-205528
size: "3330"
- id: b178af3d91f80925cd8bec42e1813e7d46370236a811d3380c9c10a02b245ca7
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9a94f333d6fe202d804910534ef052b2cfa650982cdcbe48e92339c8d314dd84
- registry.k8s.io/kube-apiserver@sha256:e009ef63deaf797763b5bd423d04a099a2fe414a081bf7d216b43bc9e76b9077
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "84753391"
- id: 4f982e73e768a6ccebb54f8905b83b78d56b3a014e709c0bfe77140db3543949
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3eff58b308cdc6c65cf030333090e14cc77bea4ed4ea9a92d212a0babc924ffe
- registry.k8s.io/kube-scheduler@sha256:44229946c0966b07d5c0791681d803e77258949985e49b4ab0fbdff99d2a48c6
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "51592021"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- docker.io/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-205528
size: "4789170"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 10afed3caf3eed1b711b8fa0a9600a7b488a45653a15a598a47ac570c1204cc4
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:2faa7e87b6fbce823070978247970cea2ad90b1936e84eeae1bd2680b03c168d
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55077248"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4779e7517f375a597f100524db6f7f8b5b8499a6ccd14aacfa65432d4cfd5789
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "73195387"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-205528 image ls --format yaml --alsologtostderr:
I1212 20:20:22.053953  392142 out.go:360] Setting OutFile to fd 1 ...
I1212 20:20:22.054176  392142 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:22.054207  392142 out.go:374] Setting ErrFile to fd 2...
I1212 20:20:22.054227  392142 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:22.054497  392142 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:20:22.055166  392142 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 20:20:22.055331  392142 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 20:20:22.055911  392142 cli_runner.go:164] Run: docker container inspect functional-205528 --format={{.State.Status}}
I1212 20:20:22.084996  392142 ssh_runner.go:195] Run: systemctl --version
I1212 20:20:22.085055  392142 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-205528
I1212 20:20:22.110311  392142 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-205528/id_rsa Username:docker}
I1212 20:20:22.236646  392142 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.29s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-205528 ssh pgrep buildkitd: exit status 1 (370.467103ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr: (3.368064563s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f387e665f5e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-205528
--> 8a1d8f37712
Successfully tagged localhost/my-image:functional-205528
8a1d8f37712939c86bd70a9e02c2ee511be93062b5a99aa071b352235efb24f1
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-205528 image build -t localhost/my-image:functional-205528 testdata/build --alsologtostderr:
I1212 20:20:22.565713  392284 out.go:360] Setting OutFile to fd 1 ...
I1212 20:20:22.566404  392284 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:22.566421  392284 out.go:374] Setting ErrFile to fd 2...
I1212 20:20:22.566428  392284 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:20:22.566700  392284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:20:22.568782  392284 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 20:20:22.571699  392284 config.go:182] Loaded profile config "functional-205528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
I1212 20:20:22.572428  392284 cli_runner.go:164] Run: docker container inspect functional-205528 --format={{.State.Status}}
I1212 20:20:22.598826  392284 ssh_runner.go:195] Run: systemctl --version
I1212 20:20:22.598883  392284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-205528
I1212 20:20:22.625538  392284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33157 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-205528/id_rsa Username:docker}
I1212 20:20:22.735823  392284 build_images.go:162] Building image from path: /tmp/build.3911177821.tar
I1212 20:20:22.735891  392284 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 20:20:22.744780  392284 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3911177821.tar
I1212 20:20:22.749314  392284 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3911177821.tar: stat -c "%s %y" /var/lib/minikube/build/build.3911177821.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3911177821.tar': No such file or directory
I1212 20:20:22.749348  392284 ssh_runner.go:362] scp /tmp/build.3911177821.tar --> /var/lib/minikube/build/build.3911177821.tar (3072 bytes)
I1212 20:20:22.770036  392284 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3911177821
I1212 20:20:22.778610  392284 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3911177821 -xf /var/lib/minikube/build/build.3911177821.tar
I1212 20:20:22.787998  392284 crio.go:315] Building image: /var/lib/minikube/build/build.3911177821
I1212 20:20:22.788089  392284 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-205528 /var/lib/minikube/build/build.3911177821 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1212 20:20:25.839926  392284 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-205528 /var/lib/minikube/build/build.3911177821 --cgroup-manager=cgroupfs: (3.051797994s)
I1212 20:20:25.839999  392284 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3911177821
I1212 20:20:25.847622  392284 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3911177821.tar
I1212 20:20:25.854949  392284 build_images.go:218] Built localhost/my-image:functional-205528 from /tmp/build.3911177821.tar
I1212 20:20:25.854980  392284 build_images.go:134] succeeded building to: functional-205528
I1212 20:20:25.854985  392284 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.98s)
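ImageBuild uploads the build context and builds it with podman inside the node; from the caller's side it is one command plus a listing check, sketched minimally here with the tag and context directory used above:

// Build testdata/build into the cluster runtime and confirm the tag is listed.
package main

import (
	"bytes"
	"log"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-arm64"
	build := exec.Command(mk, "-p", "functional-205528", "image", "build",
		"-t", "localhost/my-image:functional-205528", "testdata/build")
	build.Stdout, build.Stderr = log.Writer(), log.Writer()
	if err := build.Run(); err != nil {
		log.Fatalf("image build failed: %v", err)
	}
	out, err := exec.Command(mk, "-p", "functional-205528", "image", "ls").Output()
	if err != nil {
		log.Fatalf("image ls failed: %v", err)
	}
	if !bytes.Contains(out, []byte("localhost/my-image:functional-205528")) {
		log.Fatal("built image not found in image ls output")
	}
	log.Println("image built and listed")
}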

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.64s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-205528
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image load --daemon kicbase/echo-server:functional-205528 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-205528 image load --daemon kicbase/echo-server:functional-205528 --alsologtostderr: (1.73447904s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.02s)
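Setup plus ImageLoadDaemon amount to tagging an image in the host Docker daemon and pushing it into the cluster runtime; a minimal sketch of that sequence using the same commands the log runs:

// Pull and tag on the host, then load the tag into the cluster and list it.
package main

import (
	"log"
	"os/exec"
)

func must(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	mk := "out/minikube-linux-arm64"
	must("docker", "pull", "kicbase/echo-server:1.0")
	must("docker", "tag", "kicbase/echo-server:1.0", "kicbase/echo-server:functional-205528")
	must(mk, "-p", "functional-205528", "image", "load", "--daemon", "kicbase/echo-server:functional-205528")
	must(mk, "-p", "functional-205528", "image", "ls")
}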

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image load --daemon kicbase/echo-server:functional-205528 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-205528
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image load --daemon kicbase/echo-server:functional-205528 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image save kicbase/echo-server:functional-205528 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image rm kicbase/echo-server:functional-205528 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image ls
2025/12/12 20:20:19 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)
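ImageSaveToFile and ImageLoadFromFile form a tar round trip; a minimal sketch of the same pair of commands, with /tmp/echo-server-save.tar standing in for the workspace path used above:

// Export an image from the cluster runtime to a tarball, then import it back.
package main

import (
	"log"
	"os/exec"
)

func mk(args ...string) {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-205528"}, args...)...)
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
}

func main() {
	tar := "/tmp/echo-server-save.tar" // any writable path works
	mk("image", "save", "kicbase/echo-server:functional-205528", tar)
	mk("image", "load", tar)
	mk("image", "ls")
}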

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-205528
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 image save --daemon kicbase/echo-server:functional-205528 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-205528
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.55s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-205528 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-205528
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-205528
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-205528
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22112-362983/.minikube/files/etc/test/nested/copy/364853/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.64s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-261311 cache add registry.k8s.io/pause:3.1: (1.268478484s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-261311 cache add registry.k8s.io/pause:3.3: (1.222624392s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-261311 cache add registry.k8s.io/pause:latest: (1.152355768s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (3.64s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach707799671/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 cache add minikube-local-cache-test:functional-261311
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 cache delete minikube-local-cache-test:functional-261311
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-261311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (1.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.96s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (303.920472ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (1.96s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.12s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (0.98s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.98s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs2598767915/001/logs.txt
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (0.98s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 config get cpus: exit status 14 (64.875361ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 config get cpus: exit status 14 (115.2627ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (0.50s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-261311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-261311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (184.658747ms)

-- stdout --
	* [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1212 20:49:35.343284  422012 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:49:35.343638  422012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:49:35.343682  422012 out.go:374] Setting ErrFile to fd 2...
	I1212 20:49:35.343701  422012 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:49:35.344009  422012 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:49:35.344433  422012 out.go:368] Setting JSON to false
	I1212 20:49:35.345289  422012 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12728,"bootTime":1765559848,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:49:35.345362  422012 start.go:143] virtualization:  
	I1212 20:49:35.348531  422012 out.go:179] * [functional-261311] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1212 20:49:35.352232  422012 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:49:35.352403  422012 notify.go:221] Checking for updates...
	I1212 20:49:35.357978  422012 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:49:35.360893  422012 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:49:35.363722  422012 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:49:35.366417  422012 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:49:35.369245  422012 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:49:35.372794  422012 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:49:35.373432  422012 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:49:35.398928  422012 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:49:35.399055  422012 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:49:35.463977  422012 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:49:35.454753317 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:49:35.464085  422012 docker.go:319] overlay module found
	I1212 20:49:35.468982  422012 out.go:179] * Using the docker driver based on existing profile
	I1212 20:49:35.471758  422012 start.go:309] selected driver: docker
	I1212 20:49:35.471783  422012 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:49:35.471880  422012 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:49:35.475320  422012 out.go:203] 
	W1212 20:49:35.478185  422012 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1212 20:49:35.480974  422012 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-261311 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (0.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-261311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-261311 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-beta.0: exit status 23 (233.535663ms)

-- stdout --
	* [functional-261311] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1212 20:49:35.121076  421960 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:49:35.121207  421960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:49:35.121217  421960 out.go:374] Setting ErrFile to fd 2...
	I1212 20:49:35.121223  421960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:49:35.121611  421960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:49:35.122028  421960 out.go:368] Setting JSON to false
	I1212 20:49:35.122909  421960 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":12728,"bootTime":1765559848,"procs":161,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1212 20:49:35.122982  421960 start.go:143] virtualization:  
	I1212 20:49:35.126430  421960 out.go:179] * [functional-261311] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1212 20:49:35.129407  421960 notify.go:221] Checking for updates...
	I1212 20:49:35.129936  421960 out.go:179]   - MINIKUBE_LOCATION=22112
	I1212 20:49:35.133605  421960 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1212 20:49:35.136609  421960 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	I1212 20:49:35.139546  421960 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	I1212 20:49:35.142571  421960 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1212 20:49:35.145497  421960 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1212 20:49:35.148927  421960 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
	I1212 20:49:35.149517  421960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1212 20:49:35.185929  421960 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1212 20:49:35.186083  421960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:49:35.280000  421960 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-12-12 20:49:35.270503475 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:49:35.280114  421960 docker.go:319] overlay module found
	I1212 20:49:35.283280  421960 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1212 20:49:35.286098  421960 start.go:309] selected driver: docker
	I1212 20:49:35.286120  421960 start.go:927] validating driver "docker" against &{Name:functional-261311 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765505794-22112@sha256:ecdbfa550e7eb1f0d6522e2766f232ce114dd8c18f4d4e04bf6b41b6f7349138 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-261311 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1212 20:49:35.286231  421960 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1212 20:49:35.289782  421960 out.go:203] 
	W1212 20:49:35.292668  421960 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1212 20:49:35.295556  421960 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (0.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh -n functional-261311 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 cp functional-261311:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp1560345641/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh -n functional-261311 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh -n functional-261311 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (2.24s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/364853/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo cat /etc/test/nested/copy/364853/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.72s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/364853.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo cat /etc/ssl/certs/364853.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/364853.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo cat /usr/share/ca-certificates/364853.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3648532.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo cat /etc/ssl/certs/3648532.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3648532.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo cat /usr/share/ca-certificates/3648532.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (1.72s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 ssh "sudo systemctl is-active docker": exit status 1 (285.669449ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 ssh "sudo systemctl is-active containerd": exit status 1 (298.036818ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-261311 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-261311 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "329.398224ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "57.429336ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "331.630512ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "60.545272ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo996352420/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (389.724939ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 20:49:27.972426  364853 retry.go:31] will retry after 639.413414ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo996352420/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 ssh "sudo umount -f /mount-9p": exit status 1 (273.731893ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-261311 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo996352420/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/specific-port (2.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.26s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T" /mount1: exit status 1 (641.637677ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1212 20:49:30.304715  364853 retry.go:31] will retry after 714.793379ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-261311 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-261311 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelMo1415829422/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd/VerifyCleanup (2.26s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.05s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (0.51s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-261311 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
localhost/minikube-local-cache-test:functional-261311
localhost/kicbase/echo-server:functional-261311
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-261311 image ls --format short --alsologtostderr:
I1212 20:49:48.130908  424167 out.go:360] Setting OutFile to fd 1 ...
I1212 20:49:48.131072  424167 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:48.131083  424167 out.go:374] Setting ErrFile to fd 2...
I1212 20:49:48.131089  424167 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:48.131385  424167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:49:48.132087  424167 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:48.132214  424167 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:48.132795  424167 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
I1212 20:49:48.150328  424167 ssh_runner.go:195] Run: systemctl --version
I1212 20:49:48.150386  424167 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
I1212 20:49:48.167783  424167 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
I1212 20:49:48.271212  424167 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-261311 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager │ v1.35.0-beta.0     │ 68b5f775f1876 │ 72.2MB │
│ registry.k8s.io/pause                   │ 3.1                │ 8057e0500773a │ 529kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 3d18732f8686c │ 487kB  │
│ gcr.io/k8s-minikube/busybox             │ latest             │ 71a676dd070f4 │ 1.63MB │
│ localhost/minikube-local-cache-test     │ functional-261311  │ aebfe7a7264f4 │ 3.33kB │
│ localhost/my-image                      │ functional-261311  │ aefbe274d4b1c │ 1.64MB │
│ registry.k8s.io/coredns/coredns         │ v1.13.1            │ e08f4d9d2e6ed │ 74.5MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-beta.0     │ 404c2e1286177 │ 74.1MB │
│ registry.k8s.io/pause                   │ latest             │ 8cb2091f603e7 │ 246kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ b1a8c6f707935 │ 111MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ ba04bb24b9575 │ 29MB   │
│ localhost/kicbase/echo-server           │ functional-261311  │ ce2d2cda2d858 │ 4.79MB │
│ registry.k8s.io/etcd                    │ 3.6.5-0            │ 2c5f0dedd21c2 │ 60.9MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-beta.0     │ 16378741539f1 │ 49.8MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-beta.0     │ ccd634d9bcc36 │ 85MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ d7b100cd9a77b │ 520kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-261311 image ls --format table --alsologtostderr:
I1212 20:49:52.357028  424660 out.go:360] Setting OutFile to fd 1 ...
I1212 20:49:52.357197  424660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:52.357210  424660 out.go:374] Setting ErrFile to fd 2...
I1212 20:49:52.357216  424660 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:52.357486  424660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:49:52.358115  424660 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:52.358276  424660 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:52.358837  424660 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
I1212 20:49:52.377169  424660 ssh_runner.go:195] Run: systemctl --version
I1212 20:49:52.377237  424660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
I1212 20:49:52.395384  424660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
I1212 20:49:52.501209  424660 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.24s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-261311 image ls --format json --alsologtostderr:
[{"id":"404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904","repoDigests":["registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478","registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"74106775"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d","registry.k8s.io/kube-controller-manager@sha256:392e
6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"72170325"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"519884"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"f5d5cb327ea419b911351f286e42f9760d26e106a03630bac33f8fb04ec9db2c","repoDigests":["docker.io/library/1d79eaa08c5b624f45f85d1375cbe27c4b4aaf21515e7eebd4de2dbe2ca1e068-tmp@sha256:bda6c3c7ad9f537ab5b646e89389e267874b59515eede3322ce5dcac2e49031c"],"repoTags":[],"size":"1638178"},{"
id":"2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"60857170"},{"id":"16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6","registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"49822549"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9
b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"111333938"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-261311"],"size":"4788229"},{"id":"aefbe274d4b1ce007522b10b3964fb5b8a82c0c887b52030c7367389637d8cd4","repoD
igests":["localhost/my-image@sha256:144868a7c41398e94f9e6a087fd9ea85b3292530f79d00581c4d85b126d6d2d9"],"repoTags":["localhost/my-image:functional-261311"],"size":"1640791"},{"id":"e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6","registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"74491780"},{"id":"ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4","repoDigests":["registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58","registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"84949999"},{"id":"aebfe7a7264f484932a278deff55c8ca8706c58eb061236ff5c2e5a45d954161","repoDigests":["localhos
t/minikube-local-cache-test@sha256:4bb68926f62d263c137a372231c15bd3b68a0b8efb13c0331f2b2f836475a7b2"],"repoTags":["localhost/minikube-local-cache-test:functional-261311"],"size":"3330"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-261311 image ls --format json --alsologtostderr:
I1212 20:49:52.126014  424625 out.go:360] Setting OutFile to fd 1 ...
I1212 20:49:52.126172  424625 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:52.126196  424625 out.go:374] Setting ErrFile to fd 2...
I1212 20:49:52.126237  424625 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:52.126577  424625 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:49:52.127379  424625 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:52.127550  424625 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:52.128108  424625 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
I1212 20:49:52.146072  424625 ssh_runner.go:195] Run: systemctl --version
I1212 20:49:52.146138  424625 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
I1212 20:49:52.163573  424625 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
I1212 20:49:52.267049  424625 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.23s)
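
For reference, the JSON printed by `image ls --format json` above can be consumed programmatically. A minimal sketch (not part of the test suite), assuming a minikube binary on PATH, the profile name used in this run, and the field names visible in the captured output:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// listedImage mirrors the fields visible in the captured JSON above
// (id, repoDigests, repoTags, size); the schema is inferred from that
// output rather than taken from minikube's source.
type listedImage struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, reported as a string
}

func main() {
	// Assumes `minikube` is on PATH and the functional-261311 profile exists.
	out, err := exec.Command("minikube", "-p", "functional-261311",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []listedImage
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags, img.Size)
	}
}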
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-261311 image ls --format yaml --alsologtostderr:
- id: aebfe7a7264f484932a278deff55c8ca8706c58eb061236ff5c2e5a45d954161
repoDigests:
- localhost/minikube-local-cache-test@sha256:4bb68926f62d263c137a372231c15bd3b68a0b8efb13c0331f2b2f836475a7b2
repoTags:
- localhost/minikube-local-cache-test:functional-261311
size: "3330"
- id: e08f4d9d2e6ede8185064c13b41f8eeee95b609c0ca93b6fe7509fe527c907cf
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
- registry.k8s.io/coredns/coredns@sha256:cbd225373d1800b8d9aa2cac02d5be4172ad301cf7a1ffb509ddf8ca1fe06d74
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "74491780"
- id: 404c2e12861777b763b8feaa316d36680fc68ad308a8d2f6e55f1bb981cdd904
repoDigests:
- registry.k8s.io/kube-proxy@sha256:30981692e36c0d807a6f24510245a90c663cae725fc9442d27fe99227a9f8478
- registry.k8s.io/kube-proxy@sha256:4211d807a4c1447dcbb48f737bf3e21495b00401840b07e942938f3bbbba8a2a
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "74106775"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e9c466420bcaeede00f46ecfa0ca8cd854c549f2f13330e2723173d88f2de70f
repoTags:
- registry.k8s.io/pause:3.10.1
size: "519884"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-261311
size: "4788229"
- id: 2c5f0dedd21c25ec3a6709934d22152d53ec50fe57b72d29e4450655e3d14d42
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:0f87957e19b97d01b2c70813ee5c4949f8674deac4a65f7167c4cd85f7f2941e
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "60857170"
- id: ccd634d9bcc36ac6235e9c86676cd3a02c06afc3788a25f1bbf39ca7d44585f4
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:7ad30cb2cfe0830fc85171b4f33377538efa3663a40079642e144146d0246e58
- registry.k8s.io/kube-apiserver@sha256:b5d19906f135bbf9c424f72b42b0a44feea10296bf30909ab98d18d1c8cdb6d1
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "84949999"
- id: 68b5f775f18769fcb77bd8474c80bda2050163b6c66f4551f352b7381b8ca5be
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:1b5e92ec46ad9a06398ca52322aca686c29e2ce3e9865cc4938e2f289f82354d
- registry.k8s.io/kube-controller-manager@sha256:392e6633e69fe7534571972b6f8c3e21c6e3d3e558b562b8d795de27323add79
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "72170325"
- id: 16378741539f1be9c6e347d127537d379a6592587b09b4eb47964cb5c43a409b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:417c79fea8b6329200ba37887b32ecc2f0f8657eb83a9aa660021c17fc083db6
- registry.k8s.io/kube-scheduler@sha256:e47f5a9fdfb2268ad81d24c83ad2429e9753c7e4115d461ef4b23802dfa1d34b
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "49822549"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:2bdc3188f2ddc8e54841f69ef900a8dde1280057c97500f966a7ef31364021f1
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "111333938"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-261311 image ls --format yaml --alsologtostderr:
I1212 20:49:48.360063  424205 out.go:360] Setting OutFile to fd 1 ...
I1212 20:49:48.360211  424205 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:48.360224  424205 out.go:374] Setting ErrFile to fd 2...
I1212 20:49:48.360231  424205 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:48.360546  424205 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:49:48.361199  424205 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:48.361399  424205 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:48.361983  424205 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
I1212 20:49:48.381179  424205 ssh_runner.go:195] Run: systemctl --version
I1212 20:49:48.381238  424205 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
I1212 20:49:48.398651  424205 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
I1212 20:49:48.503406  424205 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.23s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.53s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-261311 ssh pgrep buildkitd: exit status 1 (269.105584ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image build -t localhost/my-image:functional-261311 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-261311 image build -t localhost/my-image:functional-261311 testdata/build --alsologtostderr: (3.029455548s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-arm64 -p functional-261311 image build -t localhost/my-image:functional-261311 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> f5d5cb327ea
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-261311
--> aefbe274d4b
Successfully tagged localhost/my-image:functional-261311
aefbe274d4b1ce007522b10b3964fb5b8a82c0c887b52030c7367389637d8cd4
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-261311 image build -t localhost/my-image:functional-261311 testdata/build --alsologtostderr:
I1212 20:49:48.854885  424309 out.go:360] Setting OutFile to fd 1 ...
I1212 20:49:48.856979  424309 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:48.857002  424309 out.go:374] Setting ErrFile to fd 2...
I1212 20:49:48.857009  424309 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1212 20:49:48.857409  424309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
I1212 20:49:48.858455  424309 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:48.859755  424309 config.go:182] Loaded profile config "functional-261311": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-beta.0
I1212 20:49:48.860318  424309 cli_runner.go:164] Run: docker container inspect functional-261311 --format={{.State.Status}}
I1212 20:49:48.877650  424309 ssh_runner.go:195] Run: systemctl --version
I1212 20:49:48.877712  424309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-261311
I1212 20:49:48.894682  424309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33162 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/functional-261311/id_rsa Username:docker}
I1212 20:49:48.998961  424309 build_images.go:162] Building image from path: /tmp/build.1835489208.tar
I1212 20:49:48.999059  424309 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1212 20:49:49.009045  424309 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1835489208.tar
I1212 20:49:49.013339  424309 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1835489208.tar: stat -c "%s %y" /var/lib/minikube/build/build.1835489208.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1835489208.tar': No such file or directory
I1212 20:49:49.013384  424309 ssh_runner.go:362] scp /tmp/build.1835489208.tar --> /var/lib/minikube/build/build.1835489208.tar (3072 bytes)
I1212 20:49:49.031396  424309 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1835489208
I1212 20:49:49.039447  424309 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1835489208 -xf /var/lib/minikube/build/build.1835489208.tar
I1212 20:49:49.047487  424309 crio.go:315] Building image: /var/lib/minikube/build/build.1835489208
I1212 20:49:49.047583  424309 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-261311 /var/lib/minikube/build/build.1835489208 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1212 20:49:51.810300  424309 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-261311 /var/lib/minikube/build/build.1835489208 --cgroup-manager=cgroupfs: (2.762682879s)
I1212 20:49:51.810368  424309 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1835489208
I1212 20:49:51.817927  424309 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1835489208.tar
I1212 20:49:51.827130  424309 build_images.go:218] Built localhost/my-image:functional-261311 from /tmp/build.1835489208.tar
I1212 20:49:51.827211  424309 build_images.go:134] succeeded building to: functional-261311
I1212 20:49:51.827225  424309 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (3.53s)
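
The build above packages the local testdata/build directory into a tarball, copies it onto the node, and runs the container-runtime build there (podman, since this run uses the crio runtime). A minimal sketch of reproducing that flow; the real testdata/build context is not shown in this log, so the Dockerfile and content.txt below are reconstructed from the STEP 1/3..3/3 lines and are assumptions:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Recreate a build context equivalent to the steps shown above
	// (FROM gcr.io/k8s-minikube/busybox; RUN true; ADD content.txt /).
	dir, err := os.MkdirTemp("", "minikube-build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("hello\n"), 0o644); err != nil {
		panic(err)
	}

	// minikube tars this directory, copies it to /var/lib/minikube/build on the
	// node, and builds it with the node's container runtime, as in the log above.
	cmd := exec.Command("minikube", "-p", "functional-261311",
		"image", "build", "-t", "localhost/my-image:functional-261311", dir)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}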
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-261311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.26s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.23s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image load --daemon kicbase/echo-server:functional-261311 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (1.23s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image load --daemon kicbase/echo-server:functional-261311 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (0.85s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-261311
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image load --daemon kicbase/echo-server:functional-261311 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (1.06s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.4s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image save kicbase/echo-server:functional-261311 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.40s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.58s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image rm kicbase/echo-server:functional-261311 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.58s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.79s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (0.79s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-261311
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 image save --daemon kicbase/echo-server:functional-261311 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-261311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.41s)
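
The ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon cases above form a save/remove/reload round trip. A minimal sketch of the same sequence using the commands from the log (assumes `minikube` and `docker` on PATH and the functional-261311 profile; the tarball path is arbitrary):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

// run executes a command, streaming its output, and fails loudly on error.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	const profile = "functional-261311"
	const image = "kicbase/echo-server:" + profile
	tarball := filepath.Join(os.TempDir(), "echo-server-save.tar")

	// Export the image from the cluster's runtime to a tarball on the host.
	run("minikube", "-p", profile, "image", "save", image, tarball)
	// Remove it from the cluster, then load it back from the tarball.
	run("minikube", "-p", profile, "image", "rm", image)
	run("minikube", "-p", profile, "image", "load", tarball)
	// Alternatively, push the in-cluster image straight into the local Docker daemon
	// and confirm it arrived, as the ImageSaveDaemon case does.
	run("minikube", "-p", profile, "image", "save", "--daemon", image)
	run("docker", "image", "inspect", "localhost/"+image)
}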
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.15s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-261311 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.16s)
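
All three UpdateContextCmd cases run the same `update-context` command, which rewrites the profile's kubeconfig entry to match the cluster's current endpoint. A minimal sketch of invoking it and then checking which context kubectl would use (the `kubectl config current-context` follow-up is an illustration, not part of the test):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Re-point the kubeconfig entry for the profile at the cluster's current address.
	if out, err := exec.Command("minikube", "-p", "functional-261311",
		"update-context").CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	// Confirm the active kubectl context.
	out, err := exec.Command("kubectl", "config", "current-context").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("current context: %s", out)
}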
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-261311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.05s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-261311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.02s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-261311
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.02s)
x
+
TestMultiControlPlane/serial/StartCluster (197.86s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1212 20:52:35.806350  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:35.812713  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:35.824101  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:35.845514  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:35.887040  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:35.968519  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:36.129991  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:36.451889  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:37.094045  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:38.375370  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:40.936713  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:44.061626  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:46.060588  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:52:56.302635  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:53:16.784859  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:53:57.746194  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 20:54:36.832598  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (3m16.951108991s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (197.86s)
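
After the HA start, the test checks every node with `minikube status`. A minimal sketch of doing the same check programmatically via `status --output json` (the flag is used later in this report); the JSON field names below are assumptions inferred from the plain-text status blocks further down and should be verified against your minikube version:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// nodeStatus holds the per-node fields that the plain-text `status` output exposes.
type nodeStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	out, err := exec.Command("minikube", "-p", "ha-008703",
		"status", "--output", "json").Output()
	// status exits non-zero when any node is stopped (see the exit status 7 in the
	// StopSecondaryNode step below); the JSON on stdout is still usable.
	if _, isExit := err.(*exec.ExitError); err != nil && !isExit {
		panic(err)
	}
	var nodes []nodeStatus // assumes the multi-node output is a JSON array
	if err := json.Unmarshal(out, &nodes); err != nil {
		panic(err)
	}
	for _, n := range nodes {
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
			n.Name, n.Host, n.Kubelet, n.APIServer, n.Kubeconfig)
	}
}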
x
+
TestMultiControlPlane/serial/DeployApp (8.05s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 kubectl -- rollout status deployment/busybox: (5.239936876s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-hltw8 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-kc6ms -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-tczdt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-hltw8 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-kc6ms -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-tczdt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-hltw8 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-kc6ms -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-tczdt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.05s)
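
The DeployApp checks above list the busybox pods and run nslookup inside each one for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local. A minimal sketch of the same loop; it lists all pods in the default namespace exactly as the test does, which works here only because this cluster's default namespace contains just the busybox deployment's pods:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Pod names change per run, so list them first.
	out, err := exec.Command("kubectl", "--context", "ha-008703", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		panic(err)
	}
	for _, pod := range strings.Fields(string(out)) {
		for _, name := range []string{"kubernetes.io", "kubernetes.default",
			"kubernetes.default.svc.cluster.local"} {
			res, err := exec.Command("kubectl", "--context", "ha-008703",
				"exec", pod, "--", "nslookup", name).CombinedOutput()
			if err != nil {
				panic(fmt.Sprintf("%s: %v\n%s", pod, err, res))
			}
			fmt.Printf("%s resolved %s\n", pod, name)
		}
	}
}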
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.56s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-hltw8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-hltw8 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-kc6ms -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-kc6ms -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-tczdt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 kubectl -- exec busybox-7b57f96db7-tczdt -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.56s)
x
+
TestMultiControlPlane/serial/AddWorkerNode (59.11s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 node add --alsologtostderr -v 5
E1212 20:55:19.667820  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 node add --alsologtostderr -v 5: (58.006683212s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5: (1.105662044s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (59.11s)
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-008703 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.09231096s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)
x
+
TestMultiControlPlane/serial/CopyFile (20.57s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 status --output json --alsologtostderr -v 5: (1.078876568s)
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp testdata/cp-test.txt ha-008703:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703_ha-008703-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m02 "sudo cat /home/docker/cp-test_ha-008703_ha-008703-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703:/home/docker/cp-test.txt ha-008703-m03:/home/docker/cp-test_ha-008703_ha-008703-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m03 "sudo cat /home/docker/cp-test_ha-008703_ha-008703-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703:/home/docker/cp-test.txt ha-008703-m04:/home/docker/cp-test_ha-008703_ha-008703-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m04 "sudo cat /home/docker/cp-test_ha-008703_ha-008703-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp testdata/cp-test.txt ha-008703-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m02:/home/docker/cp-test.txt ha-008703:/home/docker/cp-test_ha-008703-m02_ha-008703.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703 "sudo cat /home/docker/cp-test_ha-008703-m02_ha-008703.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m02:/home/docker/cp-test.txt ha-008703-m03:/home/docker/cp-test_ha-008703-m02_ha-008703-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m03 "sudo cat /home/docker/cp-test_ha-008703-m02_ha-008703-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m02:/home/docker/cp-test.txt ha-008703-m04:/home/docker/cp-test_ha-008703-m02_ha-008703-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m04 "sudo cat /home/docker/cp-test_ha-008703-m02_ha-008703-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp testdata/cp-test.txt ha-008703-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m03:/home/docker/cp-test.txt ha-008703:/home/docker/cp-test_ha-008703-m03_ha-008703.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703 "sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m03:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703-m03_ha-008703-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m02 "sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m03:/home/docker/cp-test.txt ha-008703-m04:/home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m04 "sudo cat /home/docker/cp-test_ha-008703-m03_ha-008703-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp testdata/cp-test.txt ha-008703-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile178926978/001/cp-test_ha-008703-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703:/home/docker/cp-test_ha-008703-m04_ha-008703.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703 "sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m02:/home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m02 "sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 cp ha-008703-m04:/home/docker/cp-test.txt ha-008703-m03:/home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 ssh -n ha-008703-m03 "sudo cat /home/docker/cp-test_ha-008703-m04_ha-008703-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.57s)
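
The CopyFile helper above exercises `minikube cp` in both directions (host-to-node and node-to-node) and verifies each copy with `minikube ssh ... sudo cat`. A minimal sketch of the host-to-node and node-to-node cases, assuming a local testdata/cp-test.txt and the ha-008703 profile:

package main

import (
	"os"
	"os/exec"
)

// run invokes minikube with the given arguments and fails loudly on error.
func run(args ...string) {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	const profile = "ha-008703"
	// Host -> node: copy a local file onto the primary control plane.
	run("-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt")
	// Node -> node: copy the same file from the primary onto the m02 control plane.
	run("-p", profile, "cp", profile+":/home/docker/cp-test.txt",
		profile+"-m02:/home/docker/cp-test_"+profile+"_"+profile+"-m02.txt")
	// Verify the copy landed, as the helper does above.
	run("-p", profile, "ssh", "-n", profile+"-m02",
		"sudo cat /home/docker/cp-test_"+profile+"_"+profile+"-m02.txt")
}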
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.95s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 node stop m02 --alsologtostderr -v 5: (12.142215834s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5: exit status 7 (805.607443ms)

                                                
                                                
-- stdout --
	ha-008703
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-008703-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-008703-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-008703-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 20:56:40.657937  440569 out.go:360] Setting OutFile to fd 1 ...
	I1212 20:56:40.658053  440569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:56:40.658064  440569 out.go:374] Setting ErrFile to fd 2...
	I1212 20:56:40.658070  440569 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 20:56:40.658337  440569 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 20:56:40.658537  440569 out.go:368] Setting JSON to false
	I1212 20:56:40.658576  440569 mustload.go:66] Loading cluster: ha-008703
	I1212 20:56:40.658646  440569 notify.go:221] Checking for updates...
	I1212 20:56:40.659680  440569 config.go:182] Loaded profile config "ha-008703": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 20:56:40.659711  440569 status.go:174] checking status of ha-008703 ...
	I1212 20:56:40.660235  440569 cli_runner.go:164] Run: docker container inspect ha-008703 --format={{.State.Status}}
	I1212 20:56:40.683017  440569 status.go:371] ha-008703 host status = "Running" (err=<nil>)
	I1212 20:56:40.683041  440569 host.go:66] Checking if "ha-008703" exists ...
	I1212 20:56:40.683337  440569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703
	I1212 20:56:40.710553  440569 host.go:66] Checking if "ha-008703" exists ...
	I1212 20:56:40.710873  440569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:56:40.710925  440569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703
	I1212 20:56:40.738216  440569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33167 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703/id_rsa Username:docker}
	I1212 20:56:40.850340  440569 ssh_runner.go:195] Run: systemctl --version
	I1212 20:56:40.859638  440569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:56:40.873384  440569 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 20:56:40.944633  440569 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-12-12 20:56:40.932143681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 20:56:40.945223  440569 kubeconfig.go:125] found "ha-008703" server: "https://192.168.49.254:8443"
	I1212 20:56:40.945259  440569 api_server.go:166] Checking apiserver status ...
	I1212 20:56:40.945306  440569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:56:40.958591  440569 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1265/cgroup
	I1212 20:56:40.967741  440569 api_server.go:182] apiserver freezer: "8:freezer:/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio/crio-367e03e67550add8777afe5c82f3311a5c0095bddff8c4123a12b611d8c7c76c"
	I1212 20:56:40.967819  440569 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2ec03df03a307c836ca3bca8a2fe340d74a3066946f8731cebeff2de74c5e93a/crio/crio-367e03e67550add8777afe5c82f3311a5c0095bddff8c4123a12b611d8c7c76c/freezer.state
	I1212 20:56:40.979386  440569 api_server.go:204] freezer state: "THAWED"
	I1212 20:56:40.979412  440569 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 20:56:40.987651  440569 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 20:56:40.987683  440569 status.go:463] ha-008703 apiserver status = Running (err=<nil>)
	I1212 20:56:40.987695  440569 status.go:176] ha-008703 status: &{Name:ha-008703 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:56:40.987713  440569 status.go:174] checking status of ha-008703-m02 ...
	I1212 20:56:40.988031  440569 cli_runner.go:164] Run: docker container inspect ha-008703-m02 --format={{.State.Status}}
	I1212 20:56:41.008036  440569 status.go:371] ha-008703-m02 host status = "Stopped" (err=<nil>)
	I1212 20:56:41.008057  440569 status.go:384] host is not running, skipping remaining checks
	I1212 20:56:41.008075  440569 status.go:176] ha-008703-m02 status: &{Name:ha-008703-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:56:41.008097  440569 status.go:174] checking status of ha-008703-m03 ...
	I1212 20:56:41.008534  440569 cli_runner.go:164] Run: docker container inspect ha-008703-m03 --format={{.State.Status}}
	I1212 20:56:41.029109  440569 status.go:371] ha-008703-m03 host status = "Running" (err=<nil>)
	I1212 20:56:41.029137  440569 host.go:66] Checking if "ha-008703-m03" exists ...
	I1212 20:56:41.029442  440569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m03
	I1212 20:56:41.048435  440569 host.go:66] Checking if "ha-008703-m03" exists ...
	I1212 20:56:41.048757  440569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:56:41.048807  440569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m03
	I1212 20:56:41.067116  440569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33177 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m03/id_rsa Username:docker}
	I1212 20:56:41.170400  440569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:56:41.183475  440569 kubeconfig.go:125] found "ha-008703" server: "https://192.168.49.254:8443"
	I1212 20:56:41.183514  440569 api_server.go:166] Checking apiserver status ...
	I1212 20:56:41.183557  440569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 20:56:41.195116  440569 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1206/cgroup
	I1212 20:56:41.206287  440569 api_server.go:182] apiserver freezer: "8:freezer:/docker/e0f5861d147a796999e5440d984489e9847e9615ee1312882f46df17fd3f5422/crio/crio-f4b6110d64670d7875daa02d5ec8ae99866fe3d3760f96346b7a94c0ce0b8def"
	I1212 20:56:41.206376  440569 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e0f5861d147a796999e5440d984489e9847e9615ee1312882f46df17fd3f5422/crio/crio-f4b6110d64670d7875daa02d5ec8ae99866fe3d3760f96346b7a94c0ce0b8def/freezer.state
	I1212 20:56:41.214446  440569 api_server.go:204] freezer state: "THAWED"
	I1212 20:56:41.214478  440569 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1212 20:56:41.222870  440569 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1212 20:56:41.222898  440569 status.go:463] ha-008703-m03 apiserver status = Running (err=<nil>)
	I1212 20:56:41.222907  440569 status.go:176] ha-008703-m03 status: &{Name:ha-008703-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 20:56:41.222923  440569 status.go:174] checking status of ha-008703-m04 ...
	I1212 20:56:41.223241  440569 cli_runner.go:164] Run: docker container inspect ha-008703-m04 --format={{.State.Status}}
	I1212 20:56:41.245351  440569 status.go:371] ha-008703-m04 host status = "Running" (err=<nil>)
	I1212 20:56:41.245373  440569 host.go:66] Checking if "ha-008703-m04" exists ...
	I1212 20:56:41.245699  440569 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-008703-m04
	I1212 20:56:41.264563  440569 host.go:66] Checking if "ha-008703-m04" exists ...
	I1212 20:56:41.264890  440569 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 20:56:41.264943  440569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-008703-m04
	I1212 20:56:41.283780  440569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33182 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/ha-008703-m04/id_rsa Username:docker}
	I1212 20:56:41.389797  440569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 20:56:41.405389  440569 status.go:176] ha-008703-m04 status: &{Name:ha-008703-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.95s)
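
The status check above exits non-zero as soon as any node is down; with ha-008703-m02 stopped it returned exit status 7 while still printing the per-node summary. A minimal sketch that runs the same command and surfaces the exit code (whether code 7 is stable across minikube versions is an assumption based on this run):

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// "minikube status" prints per-node state and exits non-zero (7 in the run
	// above) when at least one node of the profile is stopped.
	cmd := exec.Command("minikube", "-p", "ha-008703", "status")
	out, err := cmd.Output()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit code 7 is what this test run observed with ha-008703-m02 stopped.
		fmt.Println("status exit code:", exitErr.ExitCode())
	} else if err != nil {
		log.Fatal(err)
	}
}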

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.88s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (21.17s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 node start m02 --alsologtostderr -v 5: (19.75478996s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-008703 status --alsologtostderr -v 5: (1.293329071s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (21.17s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.28247244s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.28s)

                                                
                                    
TestJSONOutput/start/Command (53.1s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-759631 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1212 21:09:36.832489  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-759631 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (53.095168454s)
--- PASS: TestJSONOutput/start/Command (53.10s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.88s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-759631 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-759631 --output=json --user=testUser: (5.876981593s)
--- PASS: TestJSONOutput/stop/Command (5.88s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-825937 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-825937 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (95.168919ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"f22ce6cf-f635-453a-876b-35726b04d7b6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-825937] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8cb0c9d-e117-41b9-ae87-fcb69a9093f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22112"}}
	{"specversion":"1.0","id":"7ce07942-e54e-4ae4-8a85-3a1e7108b326","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"79a583b9-3486-4fbb-b427-bfcd5a8a76c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig"}}
	{"specversion":"1.0","id":"34804aec-1268-42a9-a499-1f5057a79223","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube"}}
	{"specversion":"1.0","id":"f648a8b6-81b1-47c1-ac17-3bb816c5dda3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"21c9ed45-cbd9-4986-bc1a-0af5bc416f52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ebd144ed-ff7a-4c52-8608-d78422c731cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-825937" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-825937
--- PASS: TestErrorJSONOutput (0.25s)
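
With --output=json, minikube writes one CloudEvents-style JSON object per line, as the stdout block above shows (types io.k8s.sigs.minikube.step, .info and .error, payload under "data"). A minimal decoder sketch for such output; the struct only covers fields visible in this log, everything else is ignored:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// event models only the fields visible in the JSON lines above.
type event struct {
	Type string `json:"type"` // e.g. io.k8s.sigs.minikube.step / .info / .error
	Data struct {
		Name        string `json:"name"`
		Message     string `json:"message"`
		CurrentStep string `json:"currentstep"`
		TotalSteps  string `json:"totalsteps"`
		ExitCode    string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	// Pipe the JSON output in, e.g.:
	//   minikube start -p demo --output=json | go run decode.go
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var e event
		if err := json.Unmarshal(sc.Bytes(), &e); err != nil {
			continue // skip any non-JSON lines
		}
		fmt.Printf("%-40s %s\n", e.Type, e.Data.Message)
		if e.Type == "io.k8s.sigs.minikube.error" {
			fmt.Println("  exit code:", e.Data.ExitCode)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}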

                                                
                                    
TestKicCustomNetwork/create_custom_network (38.62s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-387102 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-387102 --network=: (36.372035725s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-387102" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-387102
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-387102: (2.221024341s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.62s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.65s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-411378 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-411378 --network=bridge: (34.480451438s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-411378" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-411378
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-411378: (2.149961329s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.65s)

                                                
                                    
TestKicExistingNetwork (35.46s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1212 21:11:32.574485  364853 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1212 21:11:32.594884  364853 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1212 21:11:32.594956  364853 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1212 21:11:32.594974  364853 cli_runner.go:164] Run: docker network inspect existing-network
W1212 21:11:32.610714  364853 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1212 21:11:32.610745  364853 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1212 21:11:32.610763  364853 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1212 21:11:32.610880  364853 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1212 21:11:32.629792  364853 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-ff7ed303f4da IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:72:12:49:ad:2d:4b} reservation:<nil>}
I1212 21:11:32.630242  364853 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4004f0b460}
I1212 21:11:32.630265  364853 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1212 21:11:32.630334  364853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1212 21:11:32.686180  364853 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-207494 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-207494 --network=existing-network: (33.145735149s)
helpers_test.go:176: Cleaning up "existing-network-207494" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-207494
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-207494: (2.167648819s)
I1212 21:12:08.016506  364853 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.46s)
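
The trace above shows the whole flow: probe for a free private subnet (192.168.49.0/24 was taken, so 192.168.58.0/24 was chosen), create the bridge network with minikube's labels, then start a profile against it with --network=existing-network. A minimal sketch of the same sequence, reusing the docker and minikube invocations from the log; the profile name and the choice of subnet are placeholders:

package main

import (
	"log"
	"os"
	"os/exec"
)

// sh runs a command, streaming its output, and aborts on failure.
func sh(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	// Create the bridge network up front, with the same options and labels
	// minikube itself used in the log above (pick any free private /24 on your host).
	sh("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")

	// Then reuse it instead of letting minikube create a per-profile network.
	sh("minikube", "start", "-p", "existing-network-demo", "--network=existing-network")
}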

                                                
                                    
TestKicCustomSubnet (36.72s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-997344 --subnet=192.168.60.0/24
E1212 21:12:35.804753  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-997344 --subnet=192.168.60.0/24: (34.43179599s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-997344 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-997344" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-997344
E1212 21:12:44.061630  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-997344: (2.253458532s)
--- PASS: TestKicCustomSubnet (36.72s)

                                                
                                    
TestKicStaticIP (38.6s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-016104 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-016104 --static-ip=192.168.200.200: (36.205412189s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-016104 ip
helpers_test.go:176: Cleaning up "static-ip-016104" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-016104
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-016104: (2.231763364s)
--- PASS: TestKicStaticIP (38.60s)
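
Custom subnets and static node IPs are plain start flags, verified afterwards with docker network inspect and minikube ip, as the two tests above do. A minimal combined sketch; profile names are placeholders, and it assumes the per-profile docker network is named after the profile, as it is for custom-subnet-997344 above:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// out runs a command and returns its trimmed stdout, aborting on failure.
func out(name string, args ...string) string {
	b, err := exec.Command(name, args...).Output()
	if err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
	return strings.TrimSpace(string(b))
}

func main() {
	// Start a profile pinned to a specific node IP, as TestKicStaticIP does.
	out("minikube", "start", "-p", "static-ip-demo", "--static-ip=192.168.200.200")

	// "minikube ip" should now report the requested address.
	fmt.Println("node IP:", out("minikube", "-p", "static-ip-demo", "ip"))

	// The resulting docker network can be inspected the same way
	// TestKicCustomSubnet checks its --subnet flag above.
	fmt.Println("subnet:", out("docker", "network", "inspect", "static-ip-demo",
		"--format", "{{(index .IPAM.Config 0).Subnet}}"))
}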

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (71.73s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-362065 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-362065 --driver=docker  --container-runtime=crio: (30.420513878s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-364715 --driver=docker  --container-runtime=crio
E1212 21:14:19.908625  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-364715 --driver=docker  --container-runtime=crio: (35.533046978s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-362065
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-364715
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:176: Cleaning up "second-364715" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p second-364715
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p second-364715: (2.146584141s)
helpers_test.go:176: Cleaning up "first-362065" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p first-362065
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p first-362065: (2.077129358s)
--- PASS: TestMinikubeProfile (71.73s)
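
Profile switching is just "minikube profile <name>", and "profile list -ojson" reports which profile is active. A minimal sketch of the two commands the test alternates between, assuming the profiles already exist (the name below comes from this run):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Make one of the two profiles active, then list all profiles as JSON.
	for _, args := range [][]string{
		{"profile", "first-362065"},
		{"profile", "list", "-ojson"},
	} {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		if err != nil {
			log.Fatalf("minikube %s: %v", strings.Join(args, " "), err)
		}
		fmt.Printf("$ minikube %s\n%s\n", strings.Join(args, " "), out)
	}
}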

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.17s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-288273 --memory=3072 --mount-string /tmp/TestMountStartserial854689737/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1212 21:14:36.832799  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-288273 --memory=3072 --mount-string /tmp/TestMountStartserial854689737/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.173638649s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.17s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-288273 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
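
The mount-start profiles above are started with --mount-string <host>:<guest> plus the uid/gid/msize/port knobs and no Kubernetes, and the follow-up check is just an "ssh -- ls" of the guest path. A minimal sketch of the same start-and-verify pair, copying the flags from the invocation above; the profile name and host directory are placeholders:

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	hostDir, err := os.MkdirTemp("", "mount-demo")
	if err != nil {
		log.Fatal(err)
	}

	// Start a no-Kubernetes profile with the host directory mounted at
	// /minikube-host, mirroring the flags used by the test above.
	start := exec.Command("minikube", "start", "-p", "mount-demo",
		"--memory=3072", "--no-kubernetes", "--driver=docker", "--container-runtime=crio",
		"--mount-string", hostDir+":/minikube-host",
		"--mount-uid", "0", "--mount-gid", "0",
		"--mount-msize", "6543", "--mount-port", "46464")
	start.Stdout, start.Stderr = os.Stdout, os.Stderr
	if err := start.Run(); err != nil {
		log.Fatal(err)
	}

	// Verify the mount from inside the guest, as VerifyMountFirst does.
	lsOut, err := exec.Command("minikube", "-p", "mount-demo", "ssh", "--", "ls", "/minikube-host").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(string(lsOut))
}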

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.68s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-289963 --memory=3072 --mount-string /tmp/TestMountStartserial854689737/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-289963 --memory=3072 --mount-string /tmp/TestMountStartserial854689737/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.67639444s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.68s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-289963 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-288273 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-288273 --alsologtostderr -v=5: (1.724293602s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-289963 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-289963
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-289963: (1.294539337s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.22s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-289963
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-289963: (7.224394035s)
--- PASS: TestMountStart/serial/RestartStopped (8.22s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-289963 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (136.8s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-309253 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-309253 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (2m16.229561662s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (136.80s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-309253 -- rollout status deployment/busybox: (3.637235543s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- exec busybox-7b57f96db7-lxf7p -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- exec busybox-7b57f96db7-r54l5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- exec busybox-7b57f96db7-lxf7p -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- exec busybox-7b57f96db7-r54l5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- exec busybox-7b57f96db7-lxf7p -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- exec busybox-7b57f96db7-r54l5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.49s)
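
The DNS checks above apply the busybox test Deployment, wait for the rollout, then exec nslookup in every pod for kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local. A minimal sketch of that loop with kubectl, assuming the multinode-309253 context from this run and the test's own manifest path:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// kubectl runs a kubectl command against the multinode-309253 context.
func kubectl(args ...string) string {
	out, err := exec.Command("kubectl", append([]string{"--context", "multinode-309253"}, args...)...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return strings.TrimSpace(string(out))
}

func main() {
	// Deploy the two-replica busybox test app and wait for it, as the test does.
	kubectl("apply", "-f", "testdata/multinodes/multinode-pod-dns-test.yaml")
	kubectl("rollout", "status", "deployment/busybox")

	// Resolve cluster and external names from every pod of the deployment.
	pods := strings.Fields(kubectl("get", "pods", "-o", "jsonpath={.items[*].metadata.name}"))
	for _, pod := range pods {
		for _, host := range []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"} {
			fmt.Printf("--- %s -> %s ---\n", pod, host)
			fmt.Println(kubectl("exec", pod, "--", "nslookup", host))
		}
	}
}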

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- exec busybox-7b57f96db7-lxf7p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- exec busybox-7b57f96db7-lxf7p -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- exec busybox-7b57f96db7-r54l5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-309253 -- exec busybox-7b57f96db7-r54l5 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                    
TestMultiNode/serial/AddNode (58.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-309253 -v=5 --alsologtostderr
E1212 21:17:35.804809  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:17:44.061402  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-309253 -v=5 --alsologtostderr: (58.018561605s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.77s)
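
A two-node cluster comes up from a single start with --nodes=2 (FreshStart2Nodes above), and this test then grows it with "minikube node add" before re-checking status. A minimal sketch of the same sequence; the profile name is a placeholder and the flag set is trimmed to the ones shown in the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

// mk runs a minikube command, streaming its output, and aborts on failure.
func mk(args ...string) {
	cmd := exec.Command("minikube", args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("minikube %v: %v", args, err)
	}
}

func main() {
	// Bring up a two-node cluster in one shot, as FreshStart2Nodes does above.
	mk("start", "-p", "multinode-demo", "--wait=true", "--memory=3072", "--nodes=2",
		"--driver=docker", "--container-runtime=crio")

	// Then grow it by one node and confirm all nodes report in.
	mk("node", "add", "-p", "multinode-demo")
	mk("-p", "multinode-demo", "status")
}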

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-309253 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.73s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp testdata/cp-test.txt multinode-309253:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp multinode-309253:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2906475266/001/cp-test_multinode-309253.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp multinode-309253:/home/docker/cp-test.txt multinode-309253-m02:/home/docker/cp-test_multinode-309253_multinode-309253-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m02 "sudo cat /home/docker/cp-test_multinode-309253_multinode-309253-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp multinode-309253:/home/docker/cp-test.txt multinode-309253-m03:/home/docker/cp-test_multinode-309253_multinode-309253-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m03 "sudo cat /home/docker/cp-test_multinode-309253_multinode-309253-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp testdata/cp-test.txt multinode-309253-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp multinode-309253-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2906475266/001/cp-test_multinode-309253-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp multinode-309253-m02:/home/docker/cp-test.txt multinode-309253:/home/docker/cp-test_multinode-309253-m02_multinode-309253.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253 "sudo cat /home/docker/cp-test_multinode-309253-m02_multinode-309253.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp multinode-309253-m02:/home/docker/cp-test.txt multinode-309253-m03:/home/docker/cp-test_multinode-309253-m02_multinode-309253-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m03 "sudo cat /home/docker/cp-test_multinode-309253-m02_multinode-309253-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp testdata/cp-test.txt multinode-309253-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp multinode-309253-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2906475266/001/cp-test_multinode-309253-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp multinode-309253-m03:/home/docker/cp-test.txt multinode-309253:/home/docker/cp-test_multinode-309253-m03_multinode-309253.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253 "sudo cat /home/docker/cp-test_multinode-309253-m03_multinode-309253.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 cp multinode-309253-m03:/home/docker/cp-test.txt multinode-309253-m02:/home/docker/cp-test_multinode-309253-m03_multinode-309253-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 ssh -n multinode-309253-m02 "sudo cat /home/docker/cp-test_multinode-309253-m03_multinode-309253-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.79s)

                                                
                                    
TestMultiNode/serial/StopNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-309253 node stop m03: (1.350797373s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-309253 status: exit status 7 (550.405951ms)

                                                
                                                
-- stdout --
	multinode-309253
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-309253-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-309253-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-309253 status --alsologtostderr: exit status 7 (538.09263ms)

                                                
                                                
-- stdout --
	multinode-309253
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-309253-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-309253-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:18:43.858482  504346 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:18:43.858604  504346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:18:43.858647  504346 out.go:374] Setting ErrFile to fd 2...
	I1212 21:18:43.858652  504346 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:18:43.858908  504346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:18:43.859083  504346 out.go:368] Setting JSON to false
	I1212 21:18:43.859111  504346 mustload.go:66] Loading cluster: multinode-309253
	I1212 21:18:43.859302  504346 notify.go:221] Checking for updates...
	I1212 21:18:43.859492  504346 config.go:182] Loaded profile config "multinode-309253": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:18:43.859516  504346 status.go:174] checking status of multinode-309253 ...
	I1212 21:18:43.860036  504346 cli_runner.go:164] Run: docker container inspect multinode-309253 --format={{.State.Status}}
	I1212 21:18:43.880411  504346 status.go:371] multinode-309253 host status = "Running" (err=<nil>)
	I1212 21:18:43.880433  504346 host.go:66] Checking if "multinode-309253" exists ...
	I1212 21:18:43.880738  504346 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-309253
	I1212 21:18:43.902085  504346 host.go:66] Checking if "multinode-309253" exists ...
	I1212 21:18:43.902376  504346 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:18:43.902430  504346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-309253
	I1212 21:18:43.926520  504346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33282 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/multinode-309253/id_rsa Username:docker}
	I1212 21:18:44.030109  504346 ssh_runner.go:195] Run: systemctl --version
	I1212 21:18:44.036692  504346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:18:44.049876  504346 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1212 21:18:44.111930  504346 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-12-12 21:18:44.101432111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1212 21:18:44.112604  504346 kubeconfig.go:125] found "multinode-309253" server: "https://192.168.67.2:8443"
	I1212 21:18:44.112637  504346 api_server.go:166] Checking apiserver status ...
	I1212 21:18:44.112681  504346 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1212 21:18:44.124457  504346 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1241/cgroup
	I1212 21:18:44.132752  504346 api_server.go:182] apiserver freezer: "8:freezer:/docker/61c55c5c902469f16444f556625beb92698f3bd2aae8a34dcaf4f548764a52a1/crio/crio-c595cc47a41f62f6ea76eaa8a9353698d9fd77ffd60ca7b687b1398d856c6b36"
	I1212 21:18:44.132827  504346 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/61c55c5c902469f16444f556625beb92698f3bd2aae8a34dcaf4f548764a52a1/crio/crio-c595cc47a41f62f6ea76eaa8a9353698d9fd77ffd60ca7b687b1398d856c6b36/freezer.state
	I1212 21:18:44.140317  504346 api_server.go:204] freezer state: "THAWED"
	I1212 21:18:44.140346  504346 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1212 21:18:44.148648  504346 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1212 21:18:44.148678  504346 status.go:463] multinode-309253 apiserver status = Running (err=<nil>)
	I1212 21:18:44.148689  504346 status.go:176] multinode-309253 status: &{Name:multinode-309253 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 21:18:44.148717  504346 status.go:174] checking status of multinode-309253-m02 ...
	I1212 21:18:44.149046  504346 cli_runner.go:164] Run: docker container inspect multinode-309253-m02 --format={{.State.Status}}
	I1212 21:18:44.165263  504346 status.go:371] multinode-309253-m02 host status = "Running" (err=<nil>)
	I1212 21:18:44.165285  504346 host.go:66] Checking if "multinode-309253-m02" exists ...
	I1212 21:18:44.165626  504346 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-309253-m02
	I1212 21:18:44.182488  504346 host.go:66] Checking if "multinode-309253-m02" exists ...
	I1212 21:18:44.182817  504346 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1212 21:18:44.182867  504346 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-309253-m02
	I1212 21:18:44.200039  504346 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33287 SSHKeyPath:/home/jenkins/minikube-integration/22112-362983/.minikube/machines/multinode-309253-m02/id_rsa Username:docker}
	I1212 21:18:44.309893  504346 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1212 21:18:44.323058  504346 status.go:176] multinode-309253-m02 status: &{Name:multinode-309253-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1212 21:18:44.323091  504346 status.go:174] checking status of multinode-309253-m03 ...
	I1212 21:18:44.323428  504346 cli_runner.go:164] Run: docker container inspect multinode-309253-m03 --format={{.State.Status}}
	I1212 21:18:44.340299  504346 status.go:371] multinode-309253-m03 host status = "Stopped" (err=<nil>)
	I1212 21:18:44.340331  504346 status.go:384] host is not running, skipping remaining checks
	I1212 21:18:44.340338  504346 status.go:176] multinode-309253-m03 status: &{Name:multinode-309253-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
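
The exit-code contract checked above can be reproduced by hand with the same commands the test drives; a minimal sketch, assuming the multinode-309253 profile from this run still exists and out/minikube-linux-arm64 is the binary under test:

    # stop one worker node, then confirm `status` reports it and exits non-zero
    out/minikube-linux-arm64 -p multinode-309253 node stop m03
    out/minikube-linux-arm64 -p multinode-309253 status
    echo "status exit code: $?"    # 7 while any node in the profile is stopped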

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (8.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-309253 node start m03 -v=5 --alsologtostderr: (7.36209077s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.18s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (77.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-309253
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-309253
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-309253: (25.133951546s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-309253 --wait=true -v=5 --alsologtostderr
E1212 21:19:36.832938  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-309253 --wait=true -v=5 --alsologtostderr: (51.834024057s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-309253
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.10s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-309253 node delete m03: (4.985513223s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.73s)
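
The go-template query in the last step above checks that every remaining node reports a Ready condition of True; a minimal sketch of the same check, assuming kubectl is pointed at this cluster's kubeconfig:

    # print one line per node: the status of its Ready condition
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # after `node delete m03` the output should contain exactly two "True" lines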

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.14s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-309253 stop: (23.927928811s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-309253 status: exit status 7 (101.808126ms)

                                                
                                                
-- stdout --
	multinode-309253
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-309253-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-309253 status --alsologtostderr: exit status 7 (105.031817ms)

                                                
                                                
-- stdout --
	multinode-309253
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-309253-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1212 21:20:39.434264  512222 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:20:39.434435  512222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:20:39.434466  512222 out.go:374] Setting ErrFile to fd 2...
	I1212 21:20:39.434486  512222 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:20:39.434754  512222 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:20:39.434976  512222 out.go:368] Setting JSON to false
	I1212 21:20:39.435029  512222 mustload.go:66] Loading cluster: multinode-309253
	I1212 21:20:39.435507  512222 config.go:182] Loaded profile config "multinode-309253": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:20:39.435570  512222 status.go:174] checking status of multinode-309253 ...
	I1212 21:20:39.436116  512222 cli_runner.go:164] Run: docker container inspect multinode-309253 --format={{.State.Status}}
	I1212 21:20:39.435082  512222 notify.go:221] Checking for updates...
	I1212 21:20:39.459343  512222 status.go:371] multinode-309253 host status = "Stopped" (err=<nil>)
	I1212 21:20:39.459365  512222 status.go:384] host is not running, skipping remaining checks
	I1212 21:20:39.459372  512222 status.go:176] multinode-309253 status: &{Name:multinode-309253 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1212 21:20:39.459399  512222 status.go:174] checking status of multinode-309253-m02 ...
	I1212 21:20:39.459714  512222 cli_runner.go:164] Run: docker container inspect multinode-309253-m02 --format={{.State.Status}}
	I1212 21:20:39.489215  512222 status.go:371] multinode-309253-m02 host status = "Stopped" (err=<nil>)
	I1212 21:20:39.489242  512222 status.go:384] host is not running, skipping remaining checks
	I1212 21:20:39.489250  512222 status.go:176] multinode-309253-m02 status: &{Name:multinode-309253-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.14s)


                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (55.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-309253 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-309253 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (54.727711173s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-309253 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (55.45s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (35.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-309253
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-309253-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-309253-m02 --driver=docker  --container-runtime=crio: exit status 14 (107.908656ms)

                                                
                                                
-- stdout --
	* [multinode-309253-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-309253-m02' is duplicated with machine name 'multinode-309253-m02' in profile 'multinode-309253'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-309253-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-309253-m03 --driver=docker  --container-runtime=crio: (32.548076733s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-309253
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-309253: exit status 80 (344.013731ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-309253 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-309253-m03 already exists in multinode-309253-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-309253-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-309253-m03: (2.119669384s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.17s)
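
Both failures above are name-collision checks: a new profile may not reuse a machine name that already belongs to another profile (exit 14, MK_USAGE), and `node add` refuses to create a node whose generated name clashes with an existing profile (exit 80, GUEST_NODE_ADD). A minimal sketch, assuming the multinode-309253 profile from this run exists:

    # rejected: "multinode-309253-m02" is already the second machine of profile multinode-309253
    out/minikube-linux-arm64 start -p multinode-309253-m02 --driver=docker --container-runtime=crio
    echo "exit: $?"   # 14
    # create a standalone profile whose name matches the next node name, then try to add a node
    out/minikube-linux-arm64 start -p multinode-309253-m03 --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 node add -p multinode-309253
    echo "exit: $?"   # 80
    out/minikube-linux-arm64 delete -p multinode-309253-m03   # clean up the throwaway profile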

                                                
                                    
x
+
TestPreload (100.11s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-336229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
E1212 21:22:35.805457  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:22:44.061537  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-336229 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (1m0.835925921s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-336229 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-arm64 -p test-preload-336229 image pull gcr.io/k8s-minikube/busybox: (2.21234272s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-336229
preload_test.go:55: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-336229: (5.963122205s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-336229 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-336229 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (28.372003279s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-336229 image list
helpers_test.go:176: Cleaning up "test-preload-336229" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-336229
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-336229: (2.480181736s)
--- PASS: TestPreload (100.11s)
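
The sequence above is the whole preload check: build a cluster without the preloaded images tarball, pull an extra image, stop, restart with preloads enabled, and confirm the pulled image survives the restart. A minimal sketch using the same commands, with the profile name taken from this run:

    out/minikube-linux-arm64 start -p test-preload-336229 --memory=3072 --preload=false --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p test-preload-336229 image pull gcr.io/k8s-minikube/busybox
    out/minikube-linux-arm64 stop  -p test-preload-336229
    out/minikube-linux-arm64 start -p test-preload-336229 --preload=true --wait=true --driver=docker --container-runtime=crio
    out/minikube-linux-arm64 -p test-preload-336229 image list | grep busybox   # the pulled image should still be listed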

                                                
                                    
x
+
TestScheduledStopUnix (108.88s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-619406 --memory=3072 --driver=docker  --container-runtime=crio
E1212 21:24:07.143101  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-619406 --memory=3072 --driver=docker  --container-runtime=crio: (32.612809698s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-619406 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 21:24:27.314593  526363 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:24:27.314704  526363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:24:27.314713  526363 out.go:374] Setting ErrFile to fd 2...
	I1212 21:24:27.314719  526363 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:24:27.315057  526363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:24:27.315358  526363 out.go:368] Setting JSON to false
	I1212 21:24:27.315485  526363 mustload.go:66] Loading cluster: scheduled-stop-619406
	I1212 21:24:27.316321  526363 config.go:182] Loaded profile config "scheduled-stop-619406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:24:27.316483  526363 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/config.json ...
	I1212 21:24:27.316737  526363 mustload.go:66] Loading cluster: scheduled-stop-619406
	I1212 21:24:27.316912  526363 config.go:182] Loaded profile config "scheduled-stop-619406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-619406 -n scheduled-stop-619406
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-619406 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 21:24:27.777517  526454 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:24:27.777689  526454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:24:27.777703  526454 out.go:374] Setting ErrFile to fd 2...
	I1212 21:24:27.777710  526454 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:24:27.778062  526454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:24:27.778369  526454 out.go:368] Setting JSON to false
	I1212 21:24:27.778591  526454 daemonize_unix.go:73] killing process 526380 as it is an old scheduled stop
	I1212 21:24:27.782537  526454 mustload.go:66] Loading cluster: scheduled-stop-619406
	I1212 21:24:27.783085  526454 config.go:182] Loaded profile config "scheduled-stop-619406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:24:27.783169  526454 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/config.json ...
	I1212 21:24:27.783382  526454 mustload.go:66] Loading cluster: scheduled-stop-619406
	I1212 21:24:27.783512  526454 config.go:182] Loaded profile config "scheduled-stop-619406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1212 21:24:27.789647  364853 retry.go:31] will retry after 81.832µs: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.790295  364853 retry.go:31] will retry after 153.073µs: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.791532  364853 retry.go:31] will retry after 306.961µs: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.796473  364853 retry.go:31] will retry after 304.966µs: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.797598  364853 retry.go:31] will retry after 654.081µs: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.798718  364853 retry.go:31] will retry after 989.474µs: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.799832  364853 retry.go:31] will retry after 957.962µs: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.800947  364853 retry.go:31] will retry after 1.191068ms: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.803141  364853 retry.go:31] will retry after 2.230548ms: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.806338  364853 retry.go:31] will retry after 2.417183ms: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.809531  364853 retry.go:31] will retry after 8.441533ms: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.818820  364853 retry.go:31] will retry after 8.01539ms: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.827100  364853 retry.go:31] will retry after 10.706617ms: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.838325  364853 retry.go:31] will retry after 27.825141ms: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
I1212 21:24:27.866551  364853 retry.go:31] will retry after 39.053625ms: open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-619406 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1212 21:24:36.831851  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-619406 -n scheduled-stop-619406
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-619406
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-619406 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1212 21:24:53.727046  526818 out.go:360] Setting OutFile to fd 1 ...
	I1212 21:24:53.727367  526818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:24:53.727396  526818 out.go:374] Setting ErrFile to fd 2...
	I1212 21:24:53.727416  526818 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1212 21:24:53.727712  526818 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22112-362983/.minikube/bin
	I1212 21:24:53.728026  526818 out.go:368] Setting JSON to false
	I1212 21:24:53.728170  526818 mustload.go:66] Loading cluster: scheduled-stop-619406
	I1212 21:24:53.728645  526818 config.go:182] Loaded profile config "scheduled-stop-619406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2
	I1212 21:24:53.728780  526818 profile.go:143] Saving config to /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/scheduled-stop-619406/config.json ...
	I1212 21:24:53.729017  526818 mustload.go:66] Loading cluster: scheduled-stop-619406
	I1212 21:24:53.729176  526818 config.go:182] Loaded profile config "scheduled-stop-619406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.2

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-619406
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-619406: exit status 7 (73.543253ms)

                                                
                                                
-- stdout --
	scheduled-stop-619406
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-619406 -n scheduled-stop-619406
E1212 21:25:38.876900  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-619406 -n scheduled-stop-619406: exit status 7 (74.606936ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-619406" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-619406
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-619406: (4.637120286s)
--- PASS: TestScheduledStopUnix (108.88s)
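
The scheduled-stop flow above exercises three things: scheduling a stop in the future, replacing or cancelling a pending schedule, and letting a short schedule actually stop the machine. A minimal sketch with the same commands; the profile name is the one used in this run, and the sleep is only a rough stand-in for the polling the test does:

    out/minikube-linux-arm64 stop -p scheduled-stop-619406 --schedule 5m        # queue a stop 5 minutes out
    out/minikube-linux-arm64 status --format='{{.TimeToStop}}' -p scheduled-stop-619406
    out/minikube-linux-arm64 stop -p scheduled-stop-619406 --cancel-scheduled   # cancel anything pending
    out/minikube-linux-arm64 stop -p scheduled-stop-619406 --schedule 15s       # short schedule; the host stops shortly after
    sleep 30
    out/minikube-linux-arm64 status --format='{{.Host}}' -p scheduled-stop-619406   # prints "Stopped", exits 7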

                                                
                                    
x
+
TestInsufficientStorage (13.25s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-510973 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-510973 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (10.620497551s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"439179b3-1256-4472-8bea-7a5367cffa1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-510973] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ae1a8379-3811-408d-b546-ab992fbb8ac6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22112"}}
	{"specversion":"1.0","id":"20638e08-998f-4aaa-b3d9-7784ec38ce1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c52ca63e-d1ac-4f58-b1d0-501b2860e7b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig"}}
	{"specversion":"1.0","id":"ec3be6ab-802c-47cd-b4ea-31c2ff49b105","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube"}}
	{"specversion":"1.0","id":"a15e6a62-3c47-4598-9ce2-74c1a0873092","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dd750dd9-3267-4142-9188-53db832ffe75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a8d47b0e-2ce5-4a74-a5ec-4317a7267f13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"24107974-3f11-43fd-8e54-5786ef7a367a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7973b2a8-3f46-44c7-8f22-29525f9e5017","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"509d44fb-fa17-4970-a710-81d26348e083","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"19ea0fa9-8abf-43d7-a946-43945698d620","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-510973\" primary control-plane node in \"insufficient-storage-510973\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d15bc197-f27a-4e6e-a613-4e3d239b9f94","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765505794-22112 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"50ebbb14-3624-438d-87de-762ba0bef874","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"06db2aa6-6400-45c2-9848-e8586f090631","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-510973 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-510973 --output=json --layout=cluster: exit status 7 (323.521191ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-510973","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-510973","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:25:54.448891  528550 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-510973" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-510973 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-510973 --output=json --layout=cluster: exit status 7 (311.619583ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-510973","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-510973","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1212 21:25:54.761835  528614 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-510973" does not appear in /home/jenkins/minikube-integration/22112-362983/kubeconfig
	E1212 21:25:54.773156  528614 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/insufficient-storage-510973/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-510973" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-510973
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-510973: (1.988804112s)
--- PASS: TestInsufficientStorage (13.25s)
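
The two MINIKUBE_TEST_* settings echoed in the JSON events above (MINIKUBE_TEST_STORAGE_CAPACITY=100, MINIKUBE_TEST_AVAILABLE_STORAGE=19) are what this run uses to simulate a full /var; `start` then fails with exit code 26 (RSRC_DOCKER_STORAGE) and `status` reports 507/InsufficientStorage. A minimal sketch of the same probe, assuming those variables behave as they do in this run; jq is only used here to pick out the status fields and is not part of the test:

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      out/minikube-linux-arm64 start -p insufficient-storage-510973 --memory=3072 --output=json --driver=docker --container-runtime=crio
    echo "start exit: $?"   # 26
    out/minikube-linux-arm64 status -p insufficient-storage-510973 --output=json --layout=cluster \
      | jq '{code: .StatusCode, name: .StatusName}'   # 507 / "InsufficientStorage"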

                                                
                                    
x
+
TestRunningBinaryUpgrade (304.6s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.4278900475 start -p running-upgrade-649209 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.4278900475 start -p running-upgrade-649209 --memory=3072 --vm-driver=docker  --container-runtime=crio: (33.304449218s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-649209 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 21:34:36.832766  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:37:35.805360  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:37:44.061081  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-649209 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.793207808s)
helpers_test.go:176: Cleaning up "running-upgrade-649209" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-649209
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-649209: (2.028589611s)
--- PASS: TestRunningBinaryUpgrade (304.60s)

                                                
                                    
x
+
TestMissingContainerUpgrade (116.85s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3504659756 start -p missing-upgrade-992322 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3504659756 start -p missing-upgrade-992322 --memory=3072 --driver=docker  --container-runtime=crio: (1m5.141239781s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-992322
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-992322
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-992322 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-992322 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (46.543156029s)
helpers_test.go:176: Cleaning up "missing-upgrade-992322" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-992322
helpers_test.go:179: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-992322: (2.952154088s)
--- PASS: TestMissingContainerUpgrade (116.85s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-406866 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-406866 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (91.666529ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-406866] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=22112
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22112-362983/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22112-362983/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
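
The failure above is a pure flag-validation check: --no-kubernetes and --kubernetes-version are mutually exclusive, and the error also points at the global config key that can pin a version. A minimal sketch:

    out/minikube-linux-arm64 start -p NoKubernetes-406866 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=crio
    echo "exit: $?"                                             # 14 (MK_USAGE); no machine is created
    out/minikube-linux-arm64 config unset kubernetes-version    # clears a globally pinned version, as the error message suggests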

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (40.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-406866 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-406866 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (39.96206598s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-406866 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.46s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (8.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-406866 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-406866 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.835713482s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-406866 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-406866 status -o json: exit status 2 (416.359883ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-406866","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-406866
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-406866: (2.90236566s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.15s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (9.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-406866 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-406866 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.90010133s)
--- PASS: TestNoKubernetes/serial/Start (9.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22112-362983/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-406866 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-406866 "sudo systemctl is-active --quiet service kubelet": exit status 1 (463.937535ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.46s)
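
VerifyK8sNotRunning simply asks systemd inside the node whether the kubelet unit is active; with --no-kubernetes the unit is inactive, the remote systemctl exits 3 (visible in the stderr above), and minikube ssh propagates that as a non-zero exit. A minimal sketch:

    out/minikube-linux-arm64 ssh -p NoKubernetes-406866 "sudo systemctl is-active --quiet service kubelet"
    echo "exit: $?"   # non-zero, because the kubelet unit is not active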

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (3.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-arm64 profile list: (2.896329662s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.53s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-406866
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-406866: (1.386294537s)
--- PASS: TestNoKubernetes/serial/Stop (1.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-406866 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-406866 --driver=docker  --container-runtime=crio: (7.896106334s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-406866 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-406866 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.031584ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (1.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (304.71s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3364073602 start -p stopped-upgrade-302169 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3364073602 start -p stopped-upgrade-302169 --memory=3072 --vm-driver=docker  --container-runtime=crio: (35.462748465s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3364073602 -p stopped-upgrade-302169 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3364073602 -p stopped-upgrade-302169 stop: (1.269582818s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-302169 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 21:29:36.832037  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:30:59.910494  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:32:35.805584  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-261311/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1212 21:32:44.061004  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/addons-603031/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-302169 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m27.976517756s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (304.71s)
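The upgrade path exercised above is: start a cluster with the released v1.35.0 binary, stop it, then start the same profile with the binary under test and confirm it comes back healthy. A minimal sketch of that sequence, assuming the binary paths and profile name copied verbatim from the log; the real test adds retries and assertions:

    // Sketch: old-binary start -> stop -> new-binary start on the same profile.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func run(bin string, args ...string) {
        cmd := exec.Command(bin, args...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatalf("%s %v: %v", bin, args, err)
        }
    }

    func main() {
        oldBin := "/tmp/minikube-v1.35.0.3364073602" // released v1.35.0 binary
        newBin := "out/minikube-linux-arm64"         // binary under test
        profile := "stopped-upgrade-302169"

        run(oldBin, "start", "-p", profile, "--memory=3072",
            "--vm-driver=docker", "--container-runtime=crio")
        run(oldBin, "-p", profile, "stop")
        run(newBin, "start", "-p", profile, "--memory=3072",
            "--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio")
    }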

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (2.23s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-302169
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-302169: (2.230140186s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.23s)

                                                
                                    
x
+
TestPause/serial/Start (81.8s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-634913 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-634913 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m21.799922668s)
--- PASS: TestPause/serial/Start (81.80s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (27.6s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-634913 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1212 21:39:36.831955  364853 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22112-362983/.minikube/profiles/functional-205528/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-634913 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (27.585402994s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (27.60s)
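The second-start check re-runs start against an already running profile and expects it to complete quickly without reconfiguring the cluster. A minimal sketch of that idea; the 60-second bound below is an illustrative threshold, not the test's actual assertion:

    // Sketch: time a second start on a running profile; a restart-free start
    // should finish well under the illustrative bound.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    func main() {
        // Binary path, profile name, and flags mirror the log above.
        cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "pause-634913",
            "--alsologtostderr", "-v=1", "--driver=docker", "--container-runtime=crio")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr

        started := time.Now()
        if err := cmd.Run(); err != nil {
            fmt.Fprintln(os.Stderr, "second start failed:", err)
            os.Exit(1)
        }
        if elapsed := time.Since(started); elapsed > 60*time.Second {
            fmt.Printf("second start took %s; a reconfiguration-free start should be much quicker\n", elapsed)
        }
    }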

                                                
                                    

Test skip (36/316)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
16 TestDownloadOnly/v1.34.2/kubectl 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
29 TestDownloadOnlyKic 0.45
31 TestOffline 0
42 TestAddons/serial/GCPAuth/RealCredentials 0.01
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
112 TestFunctional/parallel/MySQL 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
130 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
131 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
132 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
261 TestGvisorAddon 0
283 TestImageBuild 0
284 TestISOImage 0
348 TestChangeNoneUser 0
351 TestScheduledStopWindows 0
353 TestSkaffold 0
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.2/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-584504 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:176: Cleaning up "download-docker-584504" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-584504
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    